Getting Started - Installation
1. Open your command line interface (CLI) - Terminal on macOS or Command Prompt on Windows - and move to the directory you wish to run this code in
2. Install Python if you do not already have it (the example below is for Debian/Ubuntu): `sudo apt install python3`
3. Install pip (Python's package manager): `sudo apt install python3-pip`
4. Create a virtual environment: `python3 -m venv venv`
5. Activate the virtual environment (you should see `(venv)` on your command line after activation):
   - On macOS/Linux: `source venv/bin/activate`
   - On Windows Command Prompt: `venv\Scripts\activate`
6. Run `pip install tiomagic`
7. Create a `.env` file at the root of your repository. Depending on which provider(s) you are using, copy/paste the appropriate access keys into the `.env` file (a quick way to check that these keys load is sketched after this list):
   - Veo 2: `GOOGLE_API_KEY`
   - Luma AI: `LUMAAI_API_KEY`
   - Modal: `MODAL_TOKEN_ID` and `MODAL_TOKEN_SECRET`
8. Create a Hugging Face account and add the token to your Modal account (this is needed to access open-source models)
9. Copy/paste `modal_demo.py` from the [Tio Magic Animation repository](https://github.com/Tio-Magic-Company/tio-magic-animation) to run a Modal example of this toolkit
10. When you are done developing, run `deactivate` to exit the virtual environment
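If you want to confirm that your `.env` file is being picked up before running anything, a quick sanity check along the lines below can help. This is only a sketch, not part of the toolkit; it assumes you use `python-dotenv` (the same `load_dotenv` call used in the examples later in this guide) and it only checks the key names listed above.

```python
# check_env.py - optional sanity check for your .env keys (not part of the toolkit)
import os
from dotenv import load_dotenv

load_dotenv()  # reads the .env file at the root of your repository

# Only the keys for the providers you plan to use need to be set
expected_keys = ["GOOGLE_API_KEY", "LUMAAI_API_KEY", "MODAL_TOKEN_ID", "MODAL_TOKEN_SECRET"]
for key in expected_keys:
    status = "set" if os.getenv(key) else "missing"
    print(f"{key}: {status}")
```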
Getting Started - Quick Start
- At the top of your file, add `from tiomagic import tm`
- Decide which provider, feature, and model you want to run. You can run `tm.list_implementations()` to get a comprehensive list of what this toolkit can run
- Establish a configuration with `tm.configure(provider="...")`. If you are running on a Modal provider, you can also specify the GPU, timeout, and scale-down window
- Define your required arguments and optional arguments. Refer to the Parameter Documentation for a comprehensive list of parameters for each model. An example is `tm.image_to_video(model="...", required_args=..., **optional_args)`
- Run your file! A minimal end-to-end sketch follows this list
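Putting the steps together, a minimal file might look like the sketch below. The model name and arguments are placeholders, and the Modal-specific configuration options (GPU, timeout, scale-down window) are omitted because their exact keyword names are not shown in this guide; `tm.configure`, `tm.list_implementations`, and `tm.image_to_video` are the calls described above.

```python
# quick_start.py - minimal sketch of a feature call (model name and args are placeholders)
from dotenv import load_dotenv
from tiomagic import tm

load_dotenv()  # load provider keys from .env

tm.list_implementations()  # comprehensive list of what this toolkit can run

tm.configure(provider="modal")  # or "local"

required_args = {
    "prompt": "prompt for video",
    "image": "https:// url or local path to image",
}
optional_args = {}  # see the Parameter Documentation for each model's options

tm.image_to_video(model="...", required_args=required_args, **optional_args)
```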
Gradio GUI
A locally hosted Gradio GUI is provided for your convenience.
- Clone the [Tio Magic Animation Toolkit](https://github.com/Tio-Magic-Company/tio-magic-animation)
- Follow steps 1-8 of Getting Started - Installation
- Run `python3 gradio_wrapper.py`
Running Modal Models
How it works
We use Modal because we anticipate that most people do not have the local computing power to boot up multiple video AI models. Modal solves this issue by running everything in their cloud. It does cost money, but Modal gives you a few credits upon registration which you can use to demo this toolkit.
Running a feature call (e.g. `tm.text_to_video(model="...", required_args=...)`) makes an API call to Modal, which starts a Modal "app", loads a Modal "container" that includes all of the files needed to make an inference call to the requested model, and runs a video generation with the given data.
You can visit your Modal dashboard to track the models you’ve launched and the requests you’ve made.
On the most basic plan, Modal only allows you to have 8 web endpoints deployed at a time. Every time you launch a new model, you deploy 2 web endpoints (a POST endpoint to run a generation and a GET endpoint to check its status). If you deploy more than 4 models, you will run into errors when running your new models. Go to your Modal dashboard, select the model you want to take down, and click "stop app" before deploying new models.
Development Setup
- Ensure you have followed the Getting Started - Installation instructions above to prepare your local environment
- Create a Modal account. You will be given $3.00 of free credits to use, which can run a few video generations, depending on the GPU you select and the complexity of the feature you request to run
- Create a Modal key and paste its `MODAL_TOKEN_ID` and `MODAL_TOKEN_SECRET` into your `.env` file. You can create a key on your Modal dashboard or via the Modal CLI
Example
Depending on the model and feature you use, ensure that your required and optional arguments match the names and types the model expects. You can find these requirements under `/core/schemas`.
```python
from dotenv import load_dotenv
from tiomagic import tm

def interpolate_example():
    tm.configure(provider="modal")
    required_args = {
        'first_frame': 'URL or Local path to first frame image',
        'last_frame': 'URL or Local path to last frame image',
        'prompt': "Cartoon styled painter Bob Ross painting a tree on his canvas, then turns towards the camera and smiles to the audience."
    }
    optional_args = {...}
    tm.interpolate(model="wan2.1-vace-14b", required_args=required_args, **optional_args)

def check_status(job_id: str):
    # updates generation_log.json
    tm.check_generation_status(job_id)

if __name__ == "__main__":
    load_dotenv()
    interpolate_example()  # run interpolate function once, output in generation_log.json
    # job_id = "..."
    # check_status(job_id)  # run check status on job

# in the directory of the file, run: python3 'file_name.py'
```
Once you successfully run a feature call on Modal, the job will show up in `generation_log.json`. You can check the job's status by running `check_status`.
Occasionally check on the status of the job. Once it has completed on Modal, running `check_status` will download the resulting video into the `output_videos` directory in your local repository. You can also find the output video on your Modal dashboard under "Volumes".
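If you prefer not to re-run the script by hand, a simple polling loop along the following lines can work. This is only a sketch: it assumes `tm.check_generation_status(job_id)` can safely be called repeatedly, and you still rely on `generation_log.json` (or the `output_videos` directory) to see when the job has finished.

```python
# poll_status.py - hedged sketch: periodically re-check a Modal job
import time
from dotenv import load_dotenv
from tiomagic import tm

load_dotenv()

job_id = "..."  # the job id recorded in generation_log.json

for _ in range(30):                     # check every 2 minutes, up to an hour
    tm.check_generation_status(job_id)  # updates generation_log.json; downloads the video once the job completes
    time.sleep(120)
# stop the loop manually once the video appears under output_videos
```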
For detailed API information, refer to API Reference.
Running Local Models
How it works
Currently, local implementations consist of calling APIs to closed-source models. The benefit of using the Tio Magic Animation Toolkit is that you can compare multiple video models against each other at the same time, making informed decisions about which models you want to use.
Running a feature call (e.g. `tm.image_to_video(model="...", required_args=...)`) makes an API call to a closed-source model. These APIs do not have asynchronous capabilities, so you must wait for your video output. Luckily, it does not take too long (1-2 minutes) to generate.
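To illustrate the comparison point above, the sketch below runs the same image-to-video request against the two local models documented in this guide. It is only a sketch: it assumes both `GOOGLE_API_KEY` and `LUMAAI_API_KEY` are set in `.env`, and it uses a URL for the image because the Luma AI API does not accept local paths (see the notes below).

```python
# compare_local_models.py - sketch: run the same request against two local models
from dotenv import load_dotenv
from tiomagic import tm

load_dotenv()
tm.configure(provider="local")

required_args = {
    "prompt": "prompt for video",
    "image": "https:// url path to image",  # Luma AI requires a URL, not a local path
}

# Each call blocks until the video is generated and saved under output_videos
for model in ["veo-2.0-generate-001", "luma-ray-2"]:
    tm.image_to_video(model=model, required_args=required_args)
```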
Development Setup
- Ensure you have followed the Getting Started - Installation instructions above to prepare your local environment
- Get the API key for the closed-source model you want to use and paste your key into `.env`
Veo 2.0 Generate 001
- Ensure you have a `GOOGLE_API_KEY` and sufficient credits for the number of videos you want to create
- Paste your key into `.env`
```python
from tiomagic import tm

def veo_image_to_video():
    tm.configure(provider="local")
    required_args = {
        "prompt": "prompt for video",
        "image": "https:// url or local path to image",
    }
    optional_args = {
        ...
    }
    tm.image_to_video(model="veo-2.0-generate-001", required_args=required_args, **optional_args)
    return
```
NOTE: Veo 2.0 does NOT have an asynchronous method of running. You must wait for the function to complete. Once the function finishes, your video will be under the `output_videos` directory.
Luma AI Ray 2
- Ensure you have a `LUMAAI_API_KEY` and sufficient credits for the number of videos you want to create
- Paste your key into `.env`
```python
from tiomagic import tm

def luma_image_to_video():
    tm.configure(provider="local")
    required_args = {
        "prompt": "prompt for video",
        "image": "https:// url path to image",
    }
    # no optional args for luma
    tm.image_to_video(model="luma-ray-2", required_args=required_args)
    return
```
NOTES:
- All images must be URLs; they cannot be local images
- The Luma AI API does NOT have an asynchronous method of running. You must wait for the function to complete. Once the function finishes, your video will be under the `output_videos` directory