# Sarvam-30B AWQ→FP8 (Mixed-Precision)
Hybrid mixed-precision quantization of sarvamai/sarvam-30b for the Resilient AI Challenge.
## Method: AWQ first, then FP8
Two-stage sequential compression:

1. **AWQ W4A16** on MLP/expert layers (4-bit, activation-aware scaling)
2. **FP8 Dynamic** on the remaining BF16 layers (attention + layer 0)
This produces a hybrid model where each component uses the optimal precision:
| Component | Precision | Why |
|---|---|---|
| MLP/Experts (layers 1-18) | INT4 (AWQ) | 128 MoE experts tolerate 4-bit thanks to redundancy |
| Attention (layers 0-18) | FP8 | Sensitive with only 4 KV heads; FP8 preserves quality |
| Layer 0 MLP (dense) | FP8 | Dense layer (not MoE), more sensitive than experts |
| lm_head | BF16 | Output layer, kept at the base model's BF16 precision |
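
For reference, a minimal sketch of how this two-stage recipe could be expressed with llm-compressor's `AWQModifier` and `QuantizationModifier`. The ignore regexes, dataset name, and output paths are illustrative assumptions, not the exact recipe used here; in practice stage 2 also requires loading the stage-1 checkpoint with its compressed weights preserved.

```python
# Hypothetical two-stage pipeline sketch (not the exact recipe used for this card).
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier
from llmcompressor.modifiers.quantization import QuantizationModifier

# Stage 1: AWQ W4A16 on the MoE expert/MLP layers. The regex ignore
# patterns below are assumptions about this model's module names.
oneshot(
    model="sarvamai/sarvam-30b",
    dataset="open_platypus",  # placeholder; the card calibrates on indivibe + mmlu
    recipe=AWQModifier(
        targets="Linear",
        scheme="W4A16",
        ignore=[
            "lm_head",
            "re:.*self_attn.*",      # keep attention out of INT4
            "re:.*layers\\.0\\..*",  # keep the dense layer-0 MLP out of INT4
        ],
    ),
    output_dir="sarvam-30b-awq",
)

# Stage 2: FP8 dynamic on the layers AWQ skipped. FP8_DYNAMIC computes
# activation scales at runtime, so no calibration dataset is needed.
oneshot(
    model="sarvam-30b-awq",
    recipe=QuantizationModifier(
        targets="Linear",
        scheme="FP8_DYNAMIC",
        ignore=["lm_head", "re:.*experts.*"],  # experts are already INT4
    ),
    output_dir="sarvam-30b-AWQ-then-FP8",
)
```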
## Quantization Details
| Specification | Value |
|---|---|
| Method | AWQ W4A16 → FP8 Dynamic (sequential) |
| AWQ Tool | llm-compressor |
| AWQ Recipe | QuantTrio recipe (ignore attention + layer 0) |
| AWQ Calibration | sarvamai/indivibe + cais/mmlu |
| FP8 Scheme | FP8_DYNAMIC (no calibration needed) |
| Model Size | ~24 GB (vs. 60 GB BF16 baseline, 26 GB AWQ-only, 37 GB FP8-only) |
| Hardware | Quantized on NVIDIA H100 80GB |
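
One way to verify the resulting mixed-precision layout is to inspect the exported `quantization_config` (compressed-tensors format) in the checkpoint's `config.json`. A small sketch; the exact group names and fields vary by export, so treat this as an assumption about the layout rather than a guaranteed schema:

```python
# Print which quantization scheme each config group applies and to which targets.
import json

with open("sarvam-30b-AWQ-then-FP8/config.json") as f:
    cfg = json.load(f)

qcfg = cfg["quantization_config"]
for name, group in qcfg["config_groups"].items():
    w = group["weights"]
    print(name, w["num_bits"], w["type"], "->", group["targets"])
print("ignored:", qcfg.get("ignore", []))  # e.g. lm_head stays BF16
```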
## Usage with vLLM
```bash
vllm serve AMbaye018/sarvam-30b-AWQ-then-FP8 \
  --trust-remote-code \
  --tensor-parallel-size 1 \
  --gpu-memory-utilization 0.90 \
  --max-model-len 32768 \
  --host 0.0.0.0 \
  --port 8000
```
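
Once the server is up, it exposes an OpenAI-compatible API. A minimal client sketch (the prompt is illustrative):

```python
# Query the local vLLM server through its OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="AMbaye018/sarvam-30b-AWQ-then-FP8",
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```

With the challenge config below, pass `model="sarvam-30b-awq-fp8"` instead, since `served_model_name` overrides the repo path.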
## vLLM Config (for challenge submission)
```yaml
model: AMbaye018/sarvam-30b-AWQ-then-FP8
served_model_name: sarvam-30b-awq-fp8
trust_remote_code: true
tensor_parallel_size: 1
gpu_memory_utilization: 0.90
max_model_len: 32768
max_num_seqs: 64
host: 0.0.0.0
port: 8000
```
## References

- [sarvamai/sarvam-30b](https://huggingface.co/sarvamai/sarvam-30b) — Base model
- [QuantTrio/sarvam-30b-AWQ](https://huggingface.co/QuantTrio/sarvam-30b-AWQ) — AWQ recipe reference
- [llm-compressor](https://github.com/vllm-project/llm-compressor) — Quantization tool
## License
Apache License 2.0 (same as base model)