Humotica 32B (Qwen 2.5 32B)

Status: Awaiting v0.4.0 conversion

The source GGUF is available; the v0.4.0 .oom conversion is coming soon.

Files

  • humotica-32b-Q4_K_M.gguf (GGUF source)
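
Until the .oom conversion lands, a downloaded copy of the GGUF source can be sanity-checked locally: every GGUF file begins with the 4-byte ASCII magic `GGUF`. A minimal sketch (the path is illustrative):

```python
GGUF_MAGIC = b"GGUF"  # 4-byte magic at the start of every GGUF file

def is_gguf(path):
    """Return True if the file at `path` starts with the GGUF magic bytes."""
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

# Illustrative usage with the file from the list above:
# is_gguf("humotica-32b-Q4_K_M.gguf")
```

This only validates the header, not the full file; a truncated download can still pass.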

To prepare for the .oom release, install the oomllama package:

pip install oomllama

Model details

  • Format: GGUF
  • Model size: 33B params
  • Architecture: qwen2
  • Quantization: 4-bit (Q4_K_M)
