These are GGUFs of the model Huihui-MiroThinker-v1.0-8B-abliterated.

The quantizations were created with an importance matrix (imatrix) merged from combined_en_small, calibration_datav3.txt, and harmful.txt, to take advantage of the abliterated nature of the model.
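For reference, this is roughly how imatrix-based quants are produced with llama.cpp's tools; the file names and the merged calibration file below are placeholders, not the exact ones used for this release.

```shell
# Sketch of an imatrix quantization workflow with llama.cpp
# (placeholder file names, not the exact ones used here).

# 1. Compute an importance matrix from the merged calibration text
./llama-imatrix -m model-f16.gguf -f merged_calibration.txt -o imatrix.dat

# 2. Quantize using the imatrix (Q4_K_M shown; repeat for other quant types)
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

The imatrix step weights the quantization error by how much each tensor actually matters on the calibration data, which is why the choice of calibration files (including harmful.txt here) affects the resulting quants.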

Model size: 8B params
Architecture: qwen3

Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit, 16-bit

