Instructions to use Wan-AI/Wan2.2-Animate-14B-Diffusers with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diffusers
How to use Wan-AI/Wan2.2-Animate-14B-Diffusers with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.2-Animate-14B-Diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
- Notebooks
- Google Colab
- Kaggle
How to run WanAnimatePipeline with GGUF quantization?
Hello,
I tried to run WanAnimatePipeline and WanAnimateTransformer3DModel with GGUF-quantized models, but I get an "operation unsupported" error for WanAnimateTransformer3DModel. Any pointers on how to run Wan Animate with less GPU memory? I currently run out of GPU memory when trying to run the 1280 x 780 example.
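For reference, a minimal sketch of the direction I was trying, based on the GGUF loading pattern diffusers uses for other Wan transformers (`from_single_file` plus `GGUFQuantizationConfig`). This assumes `WanAnimateTransformer3DModel` supports single-file GGUF loading, which is exactly the part that fails for me, so treat it as aspirational; the `pick_gguf_quant` helper and its VRAM thresholds are my own illustrative guesses, not from any library.

```python
def pick_gguf_quant(vram_gb: float) -> str:
    """Illustrative heuristic (hypothetical, not a library API):
    pick a GGUF quant level by available VRAM. Thresholds are guesses."""
    if vram_gb >= 24:
        return "Q8_0"
    if vram_gb >= 16:
        return "Q5_K_M"
    return "Q4_K_M"


def load_pipeline(gguf_path: str):
    """Assumed-to-work loading path: GGUF transformer + CPU offload.
    gguf_path would be a local .gguf file or a URL to one."""
    import torch
    from diffusers import (
        DiffusionPipeline,
        GGUFQuantizationConfig,
        WanAnimateTransformer3DModel,
    )

    # This is the call that currently raises "operation unsupported" for me.
    transformer = WanAnimateTransformer3DModel.from_single_file(
        gguf_path,
        quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
        torch_dtype=torch.bfloat16,
    )
    pipe = DiffusionPipeline.from_pretrained(
        "Wan-AI/Wan2.2-Animate-14B-Diffusers",
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    )
    # Standard diffusers memory saver: trades speed for VRAM.
    pipe.enable_model_cpu_offload()
    return pipe
```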
I have been working on a state_dict conversion function. It was going well until I reached the motion_encoder weights; I am stuck on the mapping and can't tell which weight corresponds to which. It would be great if someone could shed light on how the original transformer weights map to the Diffusers weights for the motion_encoder piece. In the Wan Animate paper it is called the "Body Adapter", if I am not mistaken, but not much is said about it.
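To show the kind of conversion function I mean: a purely mechanical key-rename pass over the state_dict. Every key prefix in the table below is hypothetical; the real motion_encoder mapping is the part I cannot work out, so this only illustrates the structure of such a function.

```python
# Hypothetical prefix table: "original checkpoint prefix" -> "diffusers prefix".
# The real motion_encoder names are unknown to me; these are placeholders.
RENAME_MAP = {
    "motion_encoder.enc.": "motion_encoder.encoder.",
    "motion_encoder.fc.": "motion_encoder.proj.",
}


def convert_state_dict(sd: dict) -> dict:
    """Rename keys by longest-prefix match; keys without a match pass through."""
    out = {}
    for key, value in sd.items():
        new_key = key
        for old, new in RENAME_MAP.items():
            if key.startswith(old):
                new_key = new + key[len(old):]
                break
        out[new_key] = value
    return out
```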
Finally, I was able to create a conversion function; I will publish it on the diffusers GitHub soon:
https://github.com/huggingface/diffusers/pull/12691. Although I can load the GGUF files now, I am getting errors related to a dtype mismatch when a matrix operation is performed 😥
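The error is the usual symptom of two operands reaching a matmul in different dtypes (e.g. dequantized GGUF weights in one dtype, activations in another). A minimal sketch of the general fix pattern, assuming PyTorch; the actual fix in the PR may differ:

```python
import torch


def matmul_cast(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Align dtypes before a matrix multiply to avoid
    "expected scalar type X but found Y"-style errors."""
    if a.dtype != b.dtype:
        b = b.to(a.dtype)  # cast the second operand to match the first
    return a @ b
```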
I was able to fix the issue and added the fix to the same pull request.