Information
This model was quantized with NVIDIA TensorRT Model Optimizer to the NVFP4 format, with the KV cache quantized to FP8 for compatibility.
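The snippet below is a minimal sketch of how such a checkpoint can be produced with Model Optimizer's PyTorch post-training quantization API. The config name `NVFP4_DEFAULT_CFG`, the FP8 KV-cache option, and the `export_hf_checkpoint` helper are assumptions to verify against the Model Optimizer documentation for your installed version; the calibration data here is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import modelopt.torch.quantization as mtq

model_id = "nvidia/NVIDIA-Nemotron-Nano-12B-v2"  # source model for this checkpoint
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Placeholder calibration prompts; a real calibration set would be larger and
# representative of the target workload.
calib_texts = ["Model Optimizer collects activation ranges during calibration."] * 16

def forward_loop(m):
    # Run calibration data through the model so the quantizers can collect statistics.
    with torch.no_grad():
        for text in calib_texts:
            inputs = tokenizer(text, return_tensors="pt").to(m.device)
            m(**inputs)

# NVFP4 weight/activation quantization recipe. FP8 KV-cache quantization is enabled
# through an additional Model Optimizer config option (see its llm_ptq examples);
# it is omitted here because the exact key depends on the installed version.
model = mtq.quantize(model, mtq.NVFP4_DEFAULT_CFG, forward_loop)

# Export a Hugging Face-style quantized checkpoint (helper name assumed).
from modelopt.torch.export import export_hf_checkpoint
export_hf_checkpoint(model, export_dir="NVIDIA-Nemotron-Nano-12B-v2-NVFP4")
```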
Model tree for rahtml/NVIDIA-Nemotron-Nano-12B-v2-NVFP4
- Base model: nvidia/NVIDIA-Nemotron-Nano-12B-v2-Base
- Finetuned from: nvidia/NVIDIA-Nemotron-Nano-12B-v2