Video to motion

Record it once. Animate it instantly.

Upload any reference video of a person performing actions and Uthana extracts the motion into production-ready 3D animation data. No mocap suit. No studio. Just the movement you want, on the character you choose.

Try it free

Upload. Generate. Use.

1. Upload your video

Drop in a file or record from your phone. Any video with a single person and static camera works.

2. Motion is mapped

Uthana analyzes the video and generates accurate 3D motion data, mapped to any character you choose and ready to preview instantly.

3. Use it anywhere

Download as FBX or GLB with editable keyframes. Import into your DCC or engine for your animation project.

Clean up, clip, and combine

Extracted motion data doesn't have to be used as-is. Trim to the frames you need, adjust bone positions, tweak speed, or blend the result with other motions, then download the motion for full control in your DCC.

  • Clip to isolate the exact frames you want
  • Blend with text-generated or library motions
  • Download with full keyframe data for final polish
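To give a feel for the kind of post-download cleanup this enables, here is a minimal sketch in plain Python, assuming keyframes are simple (time, value) pairs. The helper names are ours for illustration, not part of Uthana's tooling:

```python
# Hypothetical helpers mirroring the clip / retime / blend workflow.
# A "track" is a list of (time_seconds, value) keyframes.

def clip(track, start, end):
    """Keep only the keyframes inside [start, end], rebased to t=0."""
    return [(t - start, v) for t, v in track if start <= t <= end]

def retime(track, speed):
    """Speed up (>1) or slow down (<1) a track by scaling key times."""
    return [(t / speed, v) for t, v in track]

def blend(a, b, weight):
    """Linearly blend two tracks with matching key times."""
    return [(ta, va * (1 - weight) + vb * weight)
            for (ta, va), (tb, vb) in zip(a, b)]

# Example: isolate the middle second of a clip and double its speed.
track = [(0.0, 0.0), (0.5, 1.0), (1.0, 2.0), (1.5, 3.0), (2.0, 4.0)]
clipped = retime(clip(track, 0.5, 1.5), speed=2.0)
```

In a real pipeline the same operations would run on the downloaded FBX or GLB keyframe data inside your DCC.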

Use any video as your source

It's as easy as recording on your phone, on the go. As long as there's one person on screen and the camera stays still, Uthana can extract the motion.

  • Upload mp4, mov, or other common video formats
  • Paste a YouTube or social media URL directly
  • Works with phone camera selfies and screen recordings

Build it into your pipeline

Send videos to Uthana's API and receive 3D motion data back programmatically, without touching the browser. Convert reference libraries, build internal tools, or integrate directly into your production pipeline.

  • Full support via the GraphQL API
  • Submit video files or URLs programmatically
  • SDKs for Python and TypeScript, plus cURL examples
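As an illustration of what an API call might look like, here is a Python sketch that builds a GraphQL request submitting a video by URL. The endpoint, mutation name, and fields are assumptions for the example, not Uthana's actual schema; consult the API documentation for the real contract.

```python
import json

# Hypothetical endpoint; check the API docs for the real one.
API_URL = "https://api.uthana.ai/graphql"

def build_request(video_url: str, character_id: str) -> dict:
    """Build a GraphQL request body submitting a video by URL."""
    # Illustrative mutation; the real schema may differ.
    mutation = """
    mutation ExtractMotion($videoUrl: String!, $characterId: ID!) {
      extractMotion(videoUrl: $videoUrl, characterId: $characterId) {
        jobId
        status
      }
    }
    """
    return {
        "query": mutation,
        "variables": {"videoUrl": video_url, "characterId": character_id},
    }

payload = build_request("https://example.com/clip.mp4", "my-character")
body = json.dumps(payload)
# A client would then POST `body` to API_URL with an auth header, e.g.:
#   requests.post(API_URL, data=body,
#                 headers={"Authorization": f"Bearer {API_KEY}"})
```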

FAQ

What kinds of videos work best?

Any video with a single person performing the movement and a static (non-moving) camera. Phone recordings, screen captures, gameplay clips, and YouTube or social media links all work. For best results, make sure the full body is visible and the lighting is decent — heavy shadows or silhouettes can reduce extraction quality.

Does the person in the video need to be wearing a mocap suit or markers?

No. Uthana extracts motion from ordinary videos. No suits, markers, or special equipment are required.

How long can my video be?

Videos can be 2 to 60 seconds long, 24 to 120 fps, and 300 px to 4096 px in resolution.
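Those limits are easy to check before uploading. A minimal sketch, where the constants mirror the limits above and the function name is ours for illustration:

```python
# Pre-upload check against the stated limits:
# 2-60 s duration, 24-120 fps, 300-4096 px resolution.
MIN_SEC, MAX_SEC = 2, 60
MIN_FPS, MAX_FPS = 24, 120
MIN_PX, MAX_PX = 300, 4096

def video_is_supported(duration_s: float, fps: float,
                       width: int, height: int) -> bool:
    """Return True if the clip fits the video-to-motion limits."""
    return (MIN_SEC <= duration_s <= MAX_SEC
            and MIN_FPS <= fps <= MAX_FPS
            and all(MIN_PX <= d <= MAX_PX for d in (width, height)))

video_is_supported(10, 30, 1920, 1080)   # typical phone clip: passes
video_is_supported(90, 30, 1920, 1080)   # 90 s is too long
```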

Is video-to-motion available through the API?

Yes! Video-to-motion is available via the Uthana API. You can call it programmatically, with inference speed and download options similar to the web app.

Can I apply the motion to my own custom character?

Yes. Uthana's IK retargeting automatically maps extracted motion to any bipedal rig — your custom models, our built-in characters, or a character you generate yourself with Uthana. No manual retargeting required.

How accurate is the motion data compared to the original video?

Video-to-motion produces motion that closely matches the reference, and its quality is best when the video has a single character, good lighting, and a static camera. For performances that require frame-perfect fidelity, you can download the resulting motion and fine-tune the keyframes in your DCC.

Can I use a video with multiple people in it?

Currently, videos should contain one person. If your reference has multiple people in frame, the extraction may not isolate the movement you want. Trim or crop your video to feature a single performer for the best results.

Studio & partner toolchain

Join us in co-developing the future of AI-driven animation.

Style transfers

Apply your game style to generic motions across your team's models.

Integrations

Leverage the Uthana platform in your development pipeline or game environment.

Data siloing

Train bespoke models, or organize and label your data with our ML tools.

Let's build to your needs

Get in touch