Qwen3.5-4B-Claude-Opus-4.6-Distilled
This terribly named model is a quick finetune of Qwen3.5-4B on the nohurry/Opus-4.6-Reasoning-3000x-filtered dataset. In informal use it tends to produce cleaner reasoning traces than the original Qwen3.5-4B while staying roughly as accurate, though I haven't benchmarked it rigorously. The model was finetuned and converted to GGUF format using Unsloth.
Its reasoning is a bit inconsistent. It's far less likely to enter endless loops and uses far fewer tokens than the original model. But it's still a 4B model finetuned on a single dataset, so don't expect miracles.
GGUF Quants
A few GGUF quants of this model are available in this repository.
Available model files:
- Qwen3.5-4B.Q4_K_M.gguf
- Qwen3.5-4B.Q5_K_M.gguf
- Qwen3.5-4B.Q8_0.gguf
- Qwen3.5-4B.BF16-mmproj.gguf

This was trained 2x faster with Unsloth.
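As a rough sketch of how you might run one of these quants locally with llama.cpp's `llama-cli` (the exact filename, context size, and sampling flags here are illustrative assumptions, not tested settings for this model):

```shell
# Run the Q4_K_M quant with llama.cpp (assumes llama-cli is on your PATH
# and the GGUF file has been downloaded into the current directory).
llama-cli \
  -m Qwen3.5-4B.Q4_K_M.gguf \
  -c 4096 \
  -n 512 \
  -p "Explain why the sky is blue, step by step."
```

The BF16-mmproj file suggests a multimodal projector; if so, it would be passed separately via a `--mmproj` flag in llama.cpp's multimodal tooling rather than loaded as the main model.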