Full suite of GGUF quantizations for the Qwen3.5 series.

This does not include the 397B-A17B, which is too large for me to quantize. See AesSedai/Qwen3.5-397B-A17B-GGUF for that one.
