These models are converted from Qwen/Qwen3-30B-A3B-Thinking-2507.

For best results, set the generation config as follows:

  • temperature = 0.6
  • top_p = 0.95
  • top_k = 20
  • min_p = 0.0
  • max output tokens: 32768

Best Practices: https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507#best-practices
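With llama.cpp, the recommended sampling parameters above map directly onto `llama-cli` flags. A minimal sketch (the GGUF filename below is hypothetical; substitute the quantization you downloaded):

```shell
# Run the model with the recommended generation config.
# --temp / --top-p / --top-k / --min-p set the sampling parameters;
# -n caps the number of generated tokens.
llama-cli -m Qwen3-30B-A3B-Thinking-2507-Q4_K_M.gguf \
  --temp 0.6 \
  --top-p 0.95 \
  --top-k 20 \
  --min-p 0.0 \
  -n 32768
```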

  • Format: GGUF
  • Model size: 31B params
  • Architecture: qwen3moe

Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
