How fast is inference?
Most text-to-motion generations complete in under 10 seconds.
Can I edit a motion after I generate it?
After generation, any motion can be trimmed. Once downloaded, you can also refine its keyframes in the DCC of your choice.
Are there advanced options available?
Yes! You can open advanced options under the prompt box to control generation settings, including:
• Foot contact (IK)
• Prompt enhancement
• Motion length (in seconds)
• Diffusion steps
These controls are useful when you want more consistency, more variation, or tighter control over the output.
Is text-to-motion available through the API?
Yes! Text-to-motion is available via the Uthana API. You can call text-to-motion programmatically, specify the model you’d like to use for generation, and get the same inference speeds and download options as the web app.
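As an illustrative sketch only: a programmatic call might assemble a JSON payload of generation settings like the ones above. The endpoint URL, field names, and model identifier below are hypothetical assumptions, not Uthana’s documented API, so check the official API reference for the real schema.

```python
import json

# Hypothetical sketch of a text-to-motion request. The URL, field names,
# and model name are illustrative placeholders, not Uthana's real API.
API_URL = "https://api.example.com/v1/text-to-motion"  # placeholder

def build_request(prompt, model, seconds=4, diffusion_steps=50):
    """Assemble a JSON payload mirroring the web app's generation settings."""
    return {
        "prompt": prompt,
        "model": model,                          # model to use for generation
        "motion_length_seconds": seconds,        # hypothetical field name
        "diffusion_steps": diffusion_steps,      # hypothetical field name
    }

payload = build_request("a character waves hello", model="example-model-v1")
body = json.dumps(payload)  # send with the HTTP client of your choice
```

Sending the request and downloading the resulting .glb/.fbx would then follow whatever response format the API documents.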
How do I download or integrate a generated motion into my project?
When viewing any motion, click the .glb or .fbx button to download the animation file. Import the file into your preferred tool or engine as you normally would.
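If your tool of choice is Blender, the import can also be scripted. The sketch below uses Blender’s built-in `bpy.ops.import_scene.gltf` and `bpy.ops.import_scene.fbx` importers; the helper names and file path are my own illustration, and the `import_motion` function only works inside Blender’s bundled Python.

```python
import os

def importer_for(filepath):
    """Pick the Blender importer matching the downloaded file's extension."""
    ext = os.path.splitext(filepath)[1].lower()
    if ext in (".glb", ".gltf"):
        return "gltf"
    if ext == ".fbx":
        return "fbx"
    raise ValueError(f"Unsupported animation format: {ext}")

def import_motion(filepath):
    """Run inside Blender: import a downloaded .glb or .fbx motion."""
    import bpy  # available only in Blender's bundled Python
    if importer_for(filepath) == "gltf":
        bpy.ops.import_scene.gltf(filepath=filepath)
    else:
        bpy.ops.import_scene.fbx(filepath=filepath)

# Example (inside Blender): import_motion("motion.glb")
```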
Why can’t my character fly or move like a dog?
Uthana’s text-to-motion models are trained primarily on human motion capture data, so they perform best on motions that follow realistic human movement.
Fantastical or physically unrealistic actions may not generate well today. We’re improving this over time, and one day pigs really will fly.
Can I generate motion without uploading my own character model?
Yes! Every Uthana account includes access to stock characters (Tar and Ava), as well as industry-standard rigs like Unreal Engine’s Manny and Quinn. These are perfect for testing and prototyping motions before applying them to your own models.
