Video-to-Motion 2.0 delivers cleaner full-body motion data, adds finger tracking to every capture, and is 3x faster than before.

Today we're launching Video-to-Motion 2.0 — a major upgrade to how you turn video reference into production-ready motion data. The new model delivers cleaner full-body capture, adds finger tracking to every capture, and generates motion three times faster than version 1.0.
If you've used Video-to-Motion before, you know the core idea: upload a video clip, get back clean motion data you can apply to your characters. Version 2.0 makes that pipeline significantly better across the board. Get started with a video now.

The biggest addition is hand and finger motion. Every capture now includes detailed finger data by default. The full-body quality has also improved substantially, with smoother joint rotations and fewer artifacts in fast or occluded movements.
Speed matters when you're iterating. Video-to-Motion 2.0 processes a six-second video into motion data in under a minute — over three times faster than Video-to-Motion 1.0. That means less waiting and more time spent actually working with your animation.

Once you've captured motion, applying it to your character should be the easy part. Video-to-Motion 2.0 makes retargeting straightforward regardless of your character's proportions or rig setup. Upload your character, apply the motion, and you're working.
Video-to-Motion 2.0 is also available via our API, so you can integrate it directly into your existing pipeline.
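As a rough picture of what that integration could look like, here is a minimal Python sketch of submitting a clip and polling for the result. Every concrete detail — the base URL, endpoint paths, field names, and response shape — is an illustrative assumption, not the documented Uthana API; check the official API reference for the real interface.

```python
"""Hypothetical sketch of driving a video-to-motion job over HTTP.

All endpoints, payload fields, and response keys below are assumptions
for illustration only.
"""
import json
import time
import urllib.request

API_BASE = "https://api.example.com/v2"  # hypothetical base URL


def build_request(video_url: str, api_key: str) -> urllib.request.Request:
    """Assemble the job-submission request (kept pure so it is easy to test)."""
    payload = json.dumps({"video_url": video_url}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/video-to-motion",  # hypothetical endpoint
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def wait_for_motion(job_id: str, api_key: str, poll_seconds: float = 5.0) -> bytes:
    """Poll the job until it finishes, then download the motion file."""
    status_req = urllib.request.Request(
        f"{API_BASE}/jobs/{job_id}",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},
    )
    while True:
        with urllib.request.urlopen(status_req) as resp:
            job = json.load(resp)
        if job["status"] == "done":
            # Fetch the finished motion data (e.g. a BVH or FBX file).
            with urllib.request.urlopen(job["result_url"]) as motion:
                return motion.read()
        if job["status"] == "failed":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(poll_seconds)
```

The submit-then-poll shape is typical for long-running generation jobs; your pipeline can kick off a batch of clips and collect results as they complete.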
Join us in co-developing the future of AI-driven animation. Apply your game's style to generic motions across your team's models, leverage the Uthana platform in your development pipeline or game environment, and train bespoke models or organize and label your data with our ML tools.