# DNA-2.1-14B-FP8

## Overview
This is an FP8-quantized version of `dnotitia/DNA-2.1-14B`, optimized for efficient inference by DLM (Data Science Lab., Ltd.).
FP8 (8-bit floating point) quantization with static per-tensor scaling reduces model size by approximately 35% while maintaining near-original accuracy. Fully compatible with vLLM for high-throughput production serving.
DNA-2.1 is specialized for Korean-native chain-of-thought reasoning, trained with Oracle-Guided Dr. GRPO to think natively in Korean.
## Model Details
| Attribute | Value |
|---|---|
| Base Model | dnotitia/DNA-2.1-14B |
| Architecture | Qwen3ForCausalLM |
| Parameters | ~14B |
| Quantization | FP8 W8A8 (Static Per-Tensor) |
| Quantization Tool | llm-compressor |
| Calibration Data | HuggingFaceH4/ultrachat_200k (512 samples) |
| Model Size | ~19 GB (vs ~30 GB in BF16) |
| Context Length | 40K tokens (native) |
| Vocabulary | 151,936 tokens |
| License | Apache 2.0 |
| Quantized By | DLM (Data Science Lab., Ltd.) |
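The size figures in the table follow from simple byte arithmetic. The sketch below is a back-of-the-envelope check, not an exact accounting; the precise checkpoint size depends on which tensors stay in BF16 and on the per-tensor scale overhead.

```python
# Rough size estimate; the ~14B parameter count comes from the table above.
params = 14e9
bf16_gb = params * 2 / 1e9  # BF16 stores 2 bytes per parameter -> ~28 GB
fp8_gb = params * 1 / 1e9   # FP8 stores 1 byte per parameter   -> ~14 GB

# The shipped FP8 checkpoint is ~19 GB rather than 14 GB because lm_head
# and the embeddings remain in BF16 and per-tensor scales add overhead.
reduction = 1 - 19 / 30     # using the checkpoint sizes from the table
print(f"BF16 ~{bf16_gb:.0f} GB, FP8 weights ~{fp8_gb:.0f} GB")
```

This lands close to the approximately 35% reduction quoted above.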
## Quantization Details
- **Method**: Static FP8 quantization via `llm-compressor` `oneshot`
- **Precision**: FP8_E4M3 for weights, FP8_E4M3 for input activations
- **Strategy**: Per-tensor symmetric scaling with MinMax observer
- **Calibration**: 512 samples from `HuggingFaceH4/ultrachat_200k` (train_sft split), max sequence length 2048
- **Format**: compressed-tensors (safetensors)
- **Preserved layers**: `lm_head` kept in full precision (BF16)
- **Targets**: all `Linear` layers (except `lm_head`)
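As a concrete illustration of static per-tensor symmetric scaling, the sketch below mimics what a MinMax observer does: derive one scale for the whole tensor from its maximum absolute value, map values onto the FP8 E4M3 range (maximum representable magnitude 448), and dequantize by multiplying the scale back. Plain integer rounding stands in for E4M3's nonuniform grid here, so this is illustrative only, not `llm-compressor`'s actual implementation.

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3

def per_tensor_fp8_quantize(w: np.ndarray):
    """One scale per tensor from a MinMax (max-abs) observer, then
    symmetric quantization into the E4M3 range."""
    scale = np.abs(w).max() / E4M3_MAX
    q = np.clip(np.round(w / scale), -E4M3_MAX, E4M3_MAX)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate full-precision values."""
    return q * scale

w = np.array([-2.0, 0.5, 1.0])
q, scale = per_tensor_fp8_quantize(w)
w_hat = dequantize(q, scale)
```

Because the scale is fixed ahead of time from calibration data ("static"), inference never needs to observe activations at runtime, which is what makes this scheme cheap to serve.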
## Usage

### vLLM (Recommended)
```shell
vllm serve dataslab/DNA-2.1-14B-FP8 \
  --dtype auto \
  --max-model-len 40960 \
  --enable-reasoning \
  --reasoning-parser deepseek_r1
```
### Python (vLLM)
```python
from vllm import LLM, SamplingParams

llm = LLM(model="dataslab/DNA-2.1-14B-FP8")
sampling_params = SamplingParams(
    temperature=0.6, top_p=0.95, top_k=20, max_tokens=4096
)
messages = [
    # "Please explain the process of Korea's economic development."
    {"role": "user", "content": "한국의 경제 발전 과정에 대해 설명해주세요."}
]
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
### Python (Transformers)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dataslab/DNA-2.1-14B-FP8")
model = AutoModelForCausalLM.from_pretrained(
    "dataslab/DNA-2.1-14B-FP8",
    device_map="auto",
)
messages = [
    # "Analyze a complex problem step by step."
    {"role": "user", "content": "복잡한 문제를 단계별로 분석해줘."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_dict=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=4096,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    do_sample=True,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
## Dynamic Thinking Mode
DNA-2.1 supports dynamic thinking with Korean-native chain-of-thought:
- **Thinking mode**: append `/think` to enable Korean step-by-step reasoning (temperature=0.6)
- **Non-thinking mode**: append `/no_think` for concise, direct responses (temperature=0.7)
- **Auto mode**: no tag; the model decides based on question complexity
## Base Model
DNA 2.1 is developed by Dnotitia Inc. and features:
- Qwen3-14B foundation
- Korean-native CoT reasoning via Oracle-Guided Dr. GRPO training
- 40K context length
- Trained to think natively in Korean within `<think>` blocks
For more details, see the arXiv paper (2508.10355).
## Quantized Models (DNA Series)
| Model | Base | Method | Size |
|---|---|---|---|
| dataslab/DNA-2.0-14B-FP8 | DNA-2.0 | FP8 W8A8 | ~19 GB |
| dataslab/DNA-2.0-14B-GPTQ | DNA-2.0 | GPTQ W4A16 | ~9 GB |
| dataslab/DNA-2.1-14B-FP8 | DNA-2.1 | FP8 W8A8 | ~19 GB |
| dataslab/DNA-2.1-14B-GPTQ | DNA-2.1 | GPTQ W4A16 | ~9 GB |
## License

Apache 2.0, the same license as the base model.
Quantized and released by DLM (Data Science Lab., Ltd.) on Hugging Face.