## Model Description
This is an MLP-Speculator draft model for Llama-3.1-8B-Instruct, trained from scratch using LK losses — training objectives that directly target acceptance rate rather than using KL divergence as a proxy.
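For context, under standard speculative sampling a draft token $x \sim q$ is accepted with probability $\min(1, p(x)/q(x))$, where $p$ is the target model's next-token distribution and $q$ is the draft's. The per-token acceptance rate is therefore

$$
\alpha(p, q) \;=\; \mathbb{E}_{x \sim q}\!\left[\min\!\left(1, \tfrac{p(x)}{q(x)}\right)\right] \;=\; \sum_{x} \min\big(p(x), q(x)\big) \;=\; 1 - \tfrac{1}{2}\lVert p - q \rVert_1 .
$$

Minimizing KL divergence between $q$ and $p$ only controls this quantity indirectly; the LK objectives optimize it directly (see the paper for the exact loss definitions).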
## Training Details
- Base model: meta-llama/Llama-3.1-8B-Instruct
- Draft architecture: MLP-Speculator (see the sketch after this list)
- Training data: Infinity-Instruct-0625 with Llama-3.1-8B generated responses
- Training objective: Hybrid LK loss with adaptive λ scheduling (η=3)
- Training: 10 epochs from random initialization
- Draft length: K = 6 speculative tokens
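For intuition, here is a minimal PyTorch sketch of an MLP-Speculator-style draft head, assuming the recurrent MLP design used by vLLM's `mlp_speculator` method. The class name, layer shapes, and greedy draft step are illustrative only and do not describe the actual checkpoint:

```python
import torch
import torch.nn as nn

class TinySpeculatorHead(nn.Module):
    """Illustrative draft head: mixes the base model's last hidden state
    with the embedding of the previously sampled token to propose the
    next draft token, applied recurrently for K steps."""

    def __init__(self, hidden: int, vocab: int):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.mix = nn.Linear(2 * hidden, hidden)
        self.act = nn.GELU()
        self.lm_head = nn.Linear(hidden, vocab, bias=False)

    def forward(self, state: torch.Tensor, prev_token: torch.Tensor, k: int = 6) -> torch.Tensor:
        drafts = []
        for _ in range(k):
            x = torch.cat([state, self.embed(prev_token)], dim=-1)
            state = self.act(self.mix(x))       # update the draft state
            logits = self.lm_head(state)
            prev_token = logits.argmax(dim=-1)  # greedy draft token (illustrative)
            drafts.append(prev_token)
        return torch.stack(drafts, dim=-1)      # (batch, K) draft token ids

head = TinySpeculatorHead(hidden=64, vocab=1000)
state = torch.randn(2, 64)               # last hidden states from the base model
prev = torch.randint(0, 1000, (2,))      # last sampled token ids
print(head(state, prev, k=6).shape)      # torch.Size([2, 6])
```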
## Performance

Average acceptance length (τ) measured across MT-bench, HumanEval, and GSM8K with K = 6:
| Configuration | Temperature = 0 | Temperature = 1 |
|---|---|---|
| MLP-Speculator + KL | 2.43 | 2.15 |
| MLP-Speculator + LK (ours) | 2.59 | 2.33 |
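As a rough sanity check, these averages can be converted into an implied per-token acceptance rate α under the i.i.d. model of Leviathan et al. (2023), in which each of the K draft tokens is accepted independently with probability α and the verifier always contributes one extra token, giving τ = (1 − α^(K+1)) / (1 − α). The helper below is a hypothetical sketch for inverting this relation numerically, not part of any released code:

```python
def implied_alpha(tau: float, k: int, iters: int = 100) -> float:
    """Bisect for the per-token acceptance rate alpha satisfying
    tau = (1 - alpha**(k + 1)) / (1 - alpha), the i.i.d. acceptance
    model of Leviathan et al. (2023)."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        # geometric series 1 + alpha + ... + alpha**k
        tau_mid = (1 - mid ** (k + 1)) / (1 - mid)
        lo, hi = (mid, hi) if tau_mid < tau else (lo, mid)
    return (lo + hi) / 2

print(round(implied_alpha(2.59, k=6), 2))  # ~0.63, LK model at temperature 0
print(round(implied_alpha(2.43, k=6), 2))  # ~0.60, KL baseline at temperature 0
```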
## Usage with vLLM
```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",
    speculative_config={
        "method": "mlp_speculator",
        "model": "nebius/MLP-Speculator-Llama-3.1-8B-Instruct",
        "num_speculative_tokens": 6,
    },
)

sampling_params = SamplingParams(temperature=0.7)
outputs = llm.generate(["Explain speculative decoding in simple terms."], sampling_params)
print(outputs[0].outputs[0].text)  # generated completion for the first prompt
```
**Note:** The current vLLM implementation samples draft tokens greedily regardless of the temperature setting, which can underestimate acceptance rates at temperature > 0. A community fix is under development (see vllm-project/vllm#20459). The acceptance metrics reported above were measured with proper rejection sampling.
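For reference, the acceptance test that proper rejection sampling applies at each draft position looks like the following sketch (Leviathan et al., 2023). Here `p` and `q` are the target and draft next-token distributions; this is an illustration, not vLLM's actual implementation:

```python
import torch

def accept_draft(p: torch.Tensor, q: torch.Tensor, token: int) -> bool:
    """Accept a draft token x ~ q with probability min(1, p(x) / q(x)).
    On rejection, the verifier resamples from the residual distribution
    proportional to max(p - q, 0)."""
    ratio = (p[token] / q[token]).clamp(max=1.0)
    return bool(torch.rand(()) < ratio)
```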
## License
This model was trained using outputs from meta-llama/Llama-3.1-8B-Instruct. Use of this model is additionally subject to the Llama 3.1 Community License Agreement.
Llama 3.1 is licensed under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved.
## Citation
```bibtex
@misc{samarin2026lklosses,
  title         = {LK Losses: Direct Acceptance Rate Optimization for Speculative Decoding},
  author        = {Alexander Samarin and Sergei Krutikov and Anton Shevtsov and Sergei Skvortsov and Filipp Fisin and Alexander Golubev},
  year          = {2026},
  eprint        = {2602.23881},
  archivePrefix = {arXiv},
  primaryClass  = {cs.LG},
  url           = {https://arxiv.org/abs/2602.23881}
}
```
## Evaluation Results

Self-reported acceptance lengths, measured at temperature = 1 with K = 6:

| Benchmark | Acceptance length (τ) |
|---|---|
| MT-Bench | 2.19 |
| GSM8K | 2.18 |
| HumanEval | 2.62 |