Wan2.2-TI2V-5B stands out as a next-generation, open-source video generation model designed for high-definition, cinematic results. Building on robust research and large-scale data, it brings together advanced architecture and practical efficiency. This model can turn text or images into smooth, detailed 720P videos at 24 frames per second—right on a single powerful graphics card. With its unique Mixture-of-Experts design, Wan2.2 balances top-tier quality with fast performance, making it a game-changer for anyone looking to create visually stunning, customizable videos—whether for creative projects, research, or industry use. Wan2.2 lets you take full control over motion, aesthetics, and composition, making cinematic video creation more accessible than ever.
Model Download
Recommended GPU Configurations for Wan2.2-TI2V-5B
GPU Model | VRAM (GB) | Typical Use | Resolution / Batch | Notes
---|---|---|---|---
NVIDIA RTX 4090 | 24 | Consumer/Prosumer | 720P, 1 video | Minimum required for single-video 720P generation
NVIDIA RTX A6000 | 48 | Workstation/Server | 720P, 1–2 videos | Faster generation; can increase batch size
NVIDIA A100 80GB | 80 | Data Center/Cloud | 720P, multi-video | Remove --offload_model True and --t5_cpu for best speed
NVIDIA H100 80GB | 80 | Data Center/Cloud | 720P, multi-video | Optimal speed; supports advanced options (FlashAttention3)
Additional Notes
- 24GB VRAM (e.g., RTX 4090): Minimum for running 720P@24fps video generation. Use --offload_model True --t5_cpu for memory efficiency.
- 48GB VRAM (e.g., A6000): Smoother performance, can process slightly larger batches.
- 80GB VRAM (e.g., A100, H100): No need for model offloading or CPU processing. Maximum speed and parallelism; ideal for multiple generations or heavy workloads.
- Multi-GPU: For even larger batch sizes and faster throughput, distributed inference using FSDP + DeepSpeed is supported (see the example sketch below).
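As a rough sketch of what multi-GPU inference looks like once the repository and weights are set up (see the steps later in this guide), the Wan2.2 repo exposes FSDP and sequence-parallel flags for torchrun. The exact flag names used here (--dit_fsdp, --t5_fsdp, --ulysses_size) should be verified against the repository README for your version:
torchrun --nproc_per_node=8 generate.py \
  --task ti2v-5B \
  --size 1280*704 \
  --ckpt_dir ./Wan2.2-TI2V-5B \
  --dit_fsdp \
  --t5_fsdp \
  --ulysses_size 8 \
  --prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage"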
Resources
Link: https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B
Step-by-Step Process to Install & Run Wan2.2 TI2V 5B Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 2: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 3: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x H200 SXM GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 4: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 5: Choose an Image
In our previous blogs, we used pre-built images from the Templates tab when creating a Virtual Machine. However, for running Wan2.2 TI2V 5B, we need a more customized environment with full CUDA development capabilities. That’s why, in this case, we switched to the Custom Image tab and selected a specific Docker image that meets all runtime and compatibility requirements.
We chose the following image:
nvidia/cuda:12.1.1-devel-ubuntu22.04
This image is essential because it includes:
- Full CUDA toolkit (including nvcc)
- Proper support for building and running GPU-based applications like Wan2.2 TI2V 5B
- Compatibility with CUDA 12.1.1 required by certain model operations
Launch Mode
We selected:
Interactive shell server
This gives us SSH access and full control over terminal operations, which is perfect for installing dependencies, downloading model weights, and running Wan2.2 TI2V 5B.
Docker Repository Authentication
We left all fields empty here.
Since the Docker image is publicly available on Docker Hub, no login credentials are required.
Identification
nvidia/cuda:12.1.1-devel-ubuntu22.04
CUDA and cuDNN images from gitlab.com/nvidia/cuda. Devel version contains full cuda toolkit with nvcc.
This setup ensures that the Wan2.2 TI2V 5B runs in a GPU-enabled environment with proper CUDA access and high compute performance.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 6: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 7: Connect to GPUs using SSH
NodeShift GPUs can be connected to and controlled through a terminal using the SSH key provided during GPU creation.
Once your GPU Node deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ button in the top right corner.
Now open your terminal and paste the proxy SSH IP or direct SSH IP.
Next, if you want to check the GPU details, run the command below:
nvidia-smi
Step 8: Check the Available Python Version and Install a Newer Version
Run the following command to check the currently installed Python version:
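python3 --version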
The system has Python 3.8.1 available by default. To install a newer version of Python, you'll need to use the deadsnakes PPA.
Run the following commands to add the deadsnakes PPA:
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt update
Step 9: Install Python 3.11
Now, run the following command to install Python 3.11 or another desired version:
sudo apt install -y python3.11 python3.11-venv python3.11-dev
Step 10: Update the Default Python3 Version
Now, run the following commands to link the new Python version as the default python3:
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.11 2
sudo update-alternatives --config python3
Then, run the following command to verify that the new Python version is active:
python3 --version
Step 11: Install and Update Pip
Run the following commands to install and update pip:
curl -O https://bootstrap.pypa.io/get-pip.py
python3.11 get-pip.py
Then, run the following command to check the version of pip:
pip --version
Step 12: Create and Activate a Python 3.11 Virtual Environment
Run the following commands to create and activate a Python 3.11 virtual environment:
apt update && apt install -y python3.11-venv git wget
python3.11 -m venv wan
source wan/bin/activate
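To confirm the virtual environment is active and pointing at Python 3.11, you can optionally run:
which python3
python3 --version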
Step 13: Clone the Wan2.2 Repository
Run the following commands to clone the Wan2.2 repository and move into it:
git clone https://github.com/Wan-Video/Wan2.2.git
cd Wan2.2
Step 14: Install Python Dependencies
Torch >= 2.4.0 is required. If you have an NVIDIA GPU, install the CUDA build of torch (recommended); the example below is for CUDA 12.x and pip.
Run the following command to install PyTorch:
pip install torch --extra-index-url https://download.pytorch.org/whl/cu121
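To confirm that a CUDA-enabled build of torch was installed, you can run a quick optional check (it should print a version >= 2.4.0 and True):
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"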
Step 15: Install the Requirements File
Run the following command to install the dependencies listed in requirements.txt:
pip install -r requirements.txt
Step 16: Download the Model Weights
You have two options: HuggingFace CLI or ModelScope.
Here’s the HuggingFace way (most common):
pip install "huggingface_hub[cli]"
huggingface-cli download Wan-AI/Wan2.2-TI2V-5B --local-dir ./Wan2.2-TI2V-5B
You will need a free HuggingFace account and to be logged in via huggingface-cli login.
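If you prefer the ModelScope route instead, the download looks roughly like this; double-check the CLI flags against the ModelScope documentation:
pip install modelscope
modelscope download Wan-AI/Wan2.2-TI2V-5B --local_dir ./Wan2.2-TI2V-5B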
Step 17: Run a Sample Text-to-Video Generation
Here’s a sample command for 720P (1280×704) text-to-video on a 24GB+ GPU (with offloading to fit in memory):
python3 generate.py \
--task ti2v-5B \
--size 1280*704 \
--ckpt_dir ./Wan2.2-TI2V-5B \
--offload_model True \
--convert_model_dtype \
--prompt "Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage"
Step 18: Locate and Play the Generated Video
After the model finishes running, you will see a message similar to:
INFO: Saving generated video to ti2v-5B_1280*704_1_Two_anthropomorphic_cats_in_comfy_boxing_gear_and__20250801_015018.mp4
INFO: Finished.
Find the generated video file in your Wan2.2 directory.
- The filename will be in this format:
ti2v-5B_1280*704_1_<your_prompt_snippet>_<timestamp>.mp4
Example:
ti2v-5B_1280*704_1_Two_anthropomorphic_cats_in_comfy_boxing_gear_and__20250801_015018.mp4
You can now play the generated video using any media player or directly in VS Code (as shown in the screenshot).
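If you would rather watch the video on your local machine, you can also copy it down with scp; the SSH port and IP below are placeholders for your own node's connection details:
scp -P <SSH_PORT> "root@<VM_IP>:~/Wan2.2/ti2v-5B_*.mp4" .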
Step 19: Run a Sample Image-to-Video Generation
You can generate a video conditioned on an image by adding --image:
python generate.py \
--task ti2v-5B \
--size 1280*704 \
--ckpt_dir ./Wan2.2-TI2V-5B \
--offload_model True \
--convert_model_dtype \
--image examples/i2v_input.JPG \
--prompt "Summer beach vacation style, a white cat wearing sunglasses sits on a surfboard. The fluffy-furred feline gazes directly at the camera with a relaxed expression."
Step 20: Install Gradio
Run the following command to install Gradio:
pip install gradio
Step 21: Write the Gradio Script
We will write the Gradio script so that it always finds the latest generated video and copies it to output.mp4 for easy access in the UI.
Create a file named gradio_app.py and add the following code:
import gradio as gr
import subprocess
import os
import glob
import shutil

def generate_video(prompt, image=None):
    size = "1280*704"
    ckpt_dir = "./Wan2.2-TI2V-5B"
    base_cmd = [
        "python", "generate.py",
        "--task", "ti2v-5B",
        "--size", size,
        "--ckpt_dir", ckpt_dir,
        "--offload_model", "True",
        "--convert_model_dtype",
        "--t5_cpu",
        "--prompt", prompt
    ]
    image_path = None
    if image is not None:
        # Save the uploaded PIL image to a temp file and switch to image-to-video mode
        image_path = "input_tmp.jpg"
        image.save(image_path)
        base_cmd += ["--image", image_path]
    # Remove any previous output video
    out_path = "output.mp4"
    if os.path.exists(out_path):
        os.remove(out_path)
    # Run the generation command
    try:
        result = subprocess.run(base_cmd, check=True, capture_output=True, text=True)
        print(result.stdout)
    except subprocess.CalledProcessError as e:
        return f"Error:\n{e.stderr}", None
    # Clean up temp image file if created
    if image_path and os.path.exists(image_path):
        os.remove(image_path)
    # generate.py saves the video as ti2v-5B_<size>_..._<timestamp>.mp4 in the current
    # directory; pick the newest one and copy it to output.mp4 for the UI
    candidates = sorted(glob.glob("ti2v-5B_*.mp4"), key=os.path.getmtime)
    if candidates:
        shutil.copy(candidates[-1], out_path)
    # Check if the video was generated
    if os.path.exists(out_path):
        return "Video generated!", out_path
    else:
        return "Generation failed.", None

# Gradio Interface
demo = gr.Interface(
    fn=generate_video,
    inputs=[
        gr.Textbox(label="Prompt", placeholder="Describe your video (in English or Chinese)"),
        gr.Image(label="Input Image (optional)", type="pil"),
    ],
    outputs=[
        gr.Textbox(label="Status"),
        gr.Video(label="Generated Video (MP4)", format="mp4")
    ],
    title="Wan2.2 TI2V-5B Video Generator",
    description="Generate 720p videos from text or image+text using Wan2.2's TI2V-5B model."
)

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=7860, share=True)
Step 22: Open Your Gradio App in the Browser
After launching the Gradio script with:
python3 gradio_app.py
you will see a message like:
* Running on local URL: http://127.0.0.1:7860
Step 23: Set Up SSH Port Forwarding
To access your remote Gradio app in your local browser, use SSH port forwarding.
We used the following command for our node (your SSH port and IP will differ):
ssh -L 7860:localhost:7860 -p 17864 root@149.7.4.152
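If your node uses a different SSH port or IP (both are shown in your node's Connect details), the general form is, with placeholders in angle brackets:
ssh -L 7860:localhost:7860 -p <SSH_PORT> root@<VM_IP>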
What this does:
- Forwards port 7860 on your remote VM to port 7860 on your local machine.
- You can now open http://localhost:7860 in your local browser and see the Gradio interface running on your server!
Recap of the flow:
- SSH into your remote machine with port forwarding enabled (as above).
- Run your Gradio script on the VM (e.g., python3 gradio_app.py).
- Open http://localhost:7860 on your local machine.
- You now have seamless access to the Gradio app UI, even though it’s running on the remote VM!
Step 24: Start Creating AI Videos with the Web UI
Once the Gradio interface is open in your browser, you’re ready to create your first AI-generated video!
- Enter your prompt describing the video you want to generate (e.g., “A futuristic city skyline at sunset, with flying cars”).
- (Optional) Upload an input image if you want to use image-to-video mode, or leave it blank for pure text-to-video.
- Click the orange “Submit” button to begin video generation.
- Wait for processing—progress will be shown in the “Status” box, and your generated video will appear in the “Generated Video” panel when complete.
You can now experiment with creative prompts and images to generate unique 720p AI videos, all through your browser!
Generated Videos and Outputs
Conclusion
With Wan2.2-TI2V-5B, high-quality video generation is finally accessible to everyone—right from your own cloud GPU or workstation. Whether you’re an artist, developer, researcher, or just curious about what’s possible, this guide helps you go from zero to stunning 720P AI videos with nothing but a prompt (and maybe an image). The process is streamlined, repeatable, and puts the power of next-generation video diffusion models directly at your fingertips.
Now, it’s your turn—experiment with new ideas, try wild prompts, bring your visions to life, and share your creations with the world. The era of open-source cinematic AI video is here, and you’re at the frontier.