Model Card for juanml82/Qwen3.5-27B-heretic-gguf

Q5_K_M GGUF quantization of https://huggingface.co/coder3101/Qwen3.5-27B-heretic

Fits on a 24 GB VRAM GPU. For vision support, you can get the mmproj files from Bartowski's quantization here: https://huggingface.co/bartowski/Qwen_Qwen3.5-27B-GGUF/tree/main
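As a rough sanity check on the 24 GB claim, here is a back-of-the-envelope estimate of the weight footprint. The 5.7 bits-per-weight figure is an assumption (Q5_K_M files mix quantization types per tensor, so the true average varies slightly by model):

```python
# Rough VRAM estimate for the quantized weights alone.
# Assumption: Q5_K_M averages ~5.7 bits per weight; treat as approximate.
PARAMS = 27e9          # 27B parameters
BITS_PER_WEIGHT = 5.7  # assumed Q5_K_M average

weight_bytes = PARAMS * BITS_PER_WEIGHT / 8
weight_gb = weight_bytes / 1e9
print(f"~{weight_gb:.1f} GB for weights")  # → ~19.2 GB
```

That leaves a few GB of headroom on a 24 GB card for the KV cache and context, which is why this quant is a comfortable fit while larger quants may not be.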

Format: GGUF (5-bit)
Model size: 27B params
Architecture: qwen35
Model tree for juanml82/Qwen3.5-27B-heretic-gguf

Base model: Qwen/Qwen3.5-27B (this model is one of 16 quantizations)