Tio Magic Animation Toolkit
Tio Magic Animation Toolkit is designed to simplify the use of open- and closed-source video AI models for animation. The Animation Toolkit empowers animators, developers, and AI enthusiasts to generate animated videos without wrestling with complex technical setup, local hardware limitations, or haphazard documentation.
This toolkit leverages Modal for cloud computing and runs a variety of open- and closed-source video generation models.
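For a sense of the intended workflow, here is a minimal sketch. The `tiomagic` import, the `image_to_video` function, and every parameter shown are assumptions made for illustration, not the toolkit's documented API.

```python
# Minimal usage sketch. NOTE: the package name, function, and parameters here
# are illustrative assumptions, not the toolkit's documented API.
from tiomagic import image_to_video  # hypothetical import

# Ask a Modal-hosted model to animate a still image from a text prompt.
video_path = image_to_video(
    image="assets/hero.png",                 # starting frame
    prompt="the hero turns and walks away",  # motion description
    model="wan2.1-i2v-14b-720p",             # one of the supported models
    provider="modal",                        # run on Modal's cloud GPUs
    output="outputs/hero_walk.mp4",          # where to save the result
)
print(f"Generated clip saved to {video_path}")
```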
Supported Features
Image to Video
Interpolate
Pose Guidance
Text to Video
Supported Providers and Models
Modal
Image to Video
- CogVideoX 5b I2V
- FramePack I2V HY
- LTX-Video
- Pusa V1
- Wan 2.1 I2V 14b 720p
- Wan 2.1 VACE 14b
- Wan 2.1 I2V FusionX (LoRA)
Interpolate
Pose Guidance
Text to Video
- CogVideoX 5b
- Pusa V1
- Wan 2.1 T2V FantomX (LoRA)
- Wan 2.1 14b
- Wan 2.1 VACE 14b
- Wan 2.1 PhantomX (LoRA)
Local
Image to Video
Interpolate
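To illustrate how the feature and model names above might map onto calls, here is a hedged sketch. The module path, function names, and model identifier strings are assumptions for illustration only.

```python
# Hypothetical mapping from the feature/model lists above to API calls.
# All names below (module, functions, model ids) are illustrative assumptions.
from tiomagic import text_to_video, interpolate, pose_guidance  # hypothetical

# Text to Video on Modal with a specific supported model.
text_to_video(
    prompt="a paper boat drifting down a rainy street",
    model="cogvideox-5b",        # assumed identifier for CogVideoX 5b
    provider="modal",
)

# Interpolate between two keyframes to produce the in-between motion.
interpolate(
    start_image="frames/keyframe_01.png",
    end_image="frames/keyframe_02.png",
    provider="modal",
)

# Pose Guidance: drive a character image with a reference pose video.
pose_guidance(
    image="assets/character.png",
    pose_video="assets/reference_dance.mp4",
    provider="modal",
)
```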
Disclaimer
TL;DR: We don’t make the videos - the AI models do. We just make it easier to use them.
TioMagic Animation is an interface toolkit that sits between you and various video generation AI models. Think of it as a universal remote control for AI video models.
What we do:
✅ Provide a simple Python API to access multiple video models
✅ Handle the complexity of deploying models on Modal/cloud infrastructure
✅ Eliminate the need for expensive local GPUs
✅ Manage job queuing, status tracking, and result retrieval (see the sketch after this list)
✅ Abstract away provider-specific implementation details
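In practice, the queuing, status-tracking, and retrieval points above might look roughly like the following sketch. The job object, its methods, and the status strings are assumptions used to illustrate the pattern, not the toolkit's actual interface.

```python
import time

# Hypothetical async flow illustrating queuing, status tracking, and retrieval.
# The function, the returned job object, its methods, and the status strings
# are assumptions for illustration, not the toolkit's actual interface.
from tiomagic import text_to_video  # hypothetical import

job = text_to_video(
    prompt="a watercolor fox leaping over a creek",
    model="wan2.1-t2v-14b",   # assumed identifier for Wan 2.1 14b
    provider="modal",
    wait=False,               # queue the job and return immediately
)

# Poll until the Modal-hosted model finishes, then download the clip.
while job.status() in ("queued", "running"):
    time.sleep(10)

if job.status() == "succeeded":
    job.download("outputs/fox_leap.mp4")
else:
    print("Generation failed:", job.error())
```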
What we don’t do:
❌ Create or train the AI models
❌ Modify or enhance model outputs
❌ Own any rights to the generated content
❌ Control what the models can or cannot generate
Important Notes:
- All generated content comes directly from the underlying models (e.g., Wan2.1-VACE, CogVideoX)
- You must comply with each model’s individual license terms
- Model availability and capabilities depend on the model creators, not us