Uthana's motion models generate structured, physically grounded human motion — the kind of data that robotics and embodied AI systems need to learn from. Start from text or video, get motion that can be retargeted across body plans and exported into simulation, training, or control workflows.
The challenge
Humanoid and embodied AI systems need large, varied sets of human motion to learn from. But sourcing that motion through mocap or manual animation is slow and hard to scale.
How it works
Describe a motion or provide a video reference. Uthana generates structured humanoid motion, retargets it to your robot's body, and exports it in the format your stack expects.
1. Start from a text prompt or a monocular video. Generate motion from a description, or extract it from footage.
2. Uthana produces temporally coherent, full-body humanoid motion — structured for downstream retargeting and use in control or training workflows.
3. Map the motion onto your robot's kinematic structure. Uthana accounts for proportions, joint constraints, and contact behavior.
4. Output in the format your pipeline needs — CSV, URDF, MJCF — for simulation, imitation learning, evaluation, demos, or internal tooling.
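The retarget-and-export steps above can be illustrated with a minimal sketch. This is not Uthana's implementation: the function names, the simple clamp-to-joint-limits logic, and the CSV layout are all illustrative assumptions.

```python
import csv
import io

def retarget(frames, joint_limits, scale=1.0):
    """Map motion frames onto a target body (illustrative only):
    scale each joint angle, then clamp to the robot's joint limits."""
    out = []
    for frame in frames:
        out.append([
            max(lo, min(hi, angle * scale))
            for angle, (lo, hi) in zip(frame, joint_limits)
        ])
    return out

def export_csv(frames, joint_names):
    """Serialize retargeted frames to CSV, one row per frame."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["frame", *joint_names])
    for i, frame in enumerate(frames):
        writer.writerow([i, *frame])
    return buf.getvalue()

# Example: two frames of hip/knee angles, retargeted to a robot
# whose knee range is tighter than the source motion's.
frames = [[0.2, 1.9], [0.4, 2.4]]
limits = [(-1.0, 1.0), (0.0, 2.0)]  # (min, max) per joint, radians
retargeted = retarget(frames, limits)
print(export_csv(retargeted, ["hip", "knee"]))
```

A production retargeter would also handle differing joint counts, limb proportions, and contact constraints; the sketch shows only the shape of the data flow.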
Why Uthana
Faster motion data creation, broader behavioral coverage, body-aware output, and direct pipeline integration.
Speed
Create usable motion examples from text or video instead of relying solely on teleoperation, manual animation, or mocap cleanup.
Coverage
Generate or extract locomotion, transitions, recovery motions, gestures, and task-specific actions across a wider range than manual methods typically cover.
Body-aware output
Go beyond generic human motion. Retarget outputs to your robot's proportions, joint limits, and kinematic structure.
Integration
Bring motion generation and export directly into your data pipelines, simulators, evaluation tools, or labeling workflows via the API.
Get started
API access for text-to-motion and video-to-motion. Retargeting to custom body plans. Batch generation for dataset creation. Enterprise options for custom models, data isolation, and deeper integration.
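Batch generation for dataset creation might look like the following sketch. Here `generate_motion` is a hypothetical stand-in for an API client call; its name, signature, and dummy return value are assumptions for illustration, not Uthana's documented API.

```python
def generate_motion(prompt):
    """Placeholder for a text-to-motion API call (hypothetical);
    returns dummy frames of joint angles for illustration."""
    return [[0.0, 0.0], [0.1, 0.2]]

def build_dataset(prompts):
    """Batch-generate one labeled motion clip per prompt."""
    return [
        {"prompt": p, "frames": generate_motion(p)}
        for p in prompts
    ]

dataset = build_dataset(["walk forward", "wave", "sit down"])
print(len(dataset))  # one clip per prompt
```

The same loop structure extends naturally to video references or to writing each clip out in the export format a simulator expects.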
Join us in co-developing the future of AI-driven animation.
Apply your game's animation style to generic motions across your team's character models.
Leverage the Uthana platform in your development pipeline or game environment.
Train bespoke models, or organize and label your data with our ML tools.