GGUF version of Wan-AI/Wan2.2-Animate-14B
The face adapter tensors are quantized at a precision slightly higher than in QuantStack's version and slightly lower than in Kijai's.
Available quantizations: 2-bit, 6-bit, 8-bit
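
To verify how the face adapter tensors were quantized in a given file, you can list per-tensor quantization types with the `gguf` Python package. This is a minimal sketch: the local filename and the `"face"` name filter are assumptions for illustration, not the repo's confirmed tensor naming.

```python
# Requires: pip install gguf
from collections import Counter

from gguf import GGUFReader

# Hypothetical local filename -- substitute the .gguf file you downloaded.
GGUF_PATH = "Wan2.2-Animate-14B-Q8_0.gguf"

reader = GGUFReader(GGUF_PATH)

# Tally quantization types, splitting tensors whose names mention the face
# adapter from everything else. The "face" substring is an assumption about
# the tensor naming; adjust it after printing the full name list if needed.
face_types = Counter()
other_types = Counter()
for tensor in reader.tensors:
    bucket = face_types if "face" in tensor.name.lower() else other_types
    bucket[tensor.tensor_type.name] += 1

print("Face-adapter tensors:", dict(face_types))
print("All other tensors:   ", dict(other_types))
```

Comparing the two tallies shows whether the face adapter tensors carry a different quantization type than the rest of the model.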