# Qwen__Qwen3-30B-A3B-Thinking-2507_RTN_w4g128
This is a 4-bit RTN (Round-To-Nearest) quantized version of Qwen/Qwen3-30B-A3B-Thinking-2507.
## Quantization Details
- Method: RTN (Round-To-Nearest)
- Bits: 4-bit
- Group Size: 128
- Base Model: Qwen/Qwen3-30B-A3B-Thinking-2507
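For readers unfamiliar with the method: RTN simply rescales each group of weights and rounds every value to the nearest representable level, with no calibration data. The sketch below is an illustrative, pure-Python approximation of 4-bit RTN with a group size of 128 (per-group asymmetric min/max scaling is an assumption here, not a statement about this checkpoint's exact packing):

```python
def rtn_quantize(weights, bits=4, group_size=128):
    """Round-to-nearest quantize a flat list of floats, group by group.

    Illustrative sketch: each group of `group_size` weights gets its own
    min/max-derived scale, values are rounded to the nearest of 2**bits
    levels, then dequantized back to floats for inspection.
    """
    qmax = 2 ** bits - 1  # 15 quantization steps for 4-bit
    out = []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        wmin, wmax = min(group), max(group)
        scale = (wmax - wmin) / qmax or 1.0  # guard all-constant groups
        for w in group:
            # Round to the nearest integer code in [0, qmax] ...
            q = max(0, min(qmax, round((w - wmin) / scale)))
            # ... then map the code back to a float
            out.append(q * scale + wmin)
    return out
```

The worst-case per-weight error of this scheme is half a quantization step, i.e. `scale / 2` for that weight's group, which is why smaller group sizes generally reduce quantization error at the cost of storing more scales.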
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "quantpa/Qwen__Qwen3-30B-A3B-Thinking-2507_RTN_w4g128"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Use the model for inference
inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Model Details
- Quantization: RTN 4-bit
- Original Model: Qwen/Qwen3-30B-A3B-Thinking-2507
- Quantized by: quantpa