SmolLM CPT
Collection: Continued pre-training of SmolLM models on the Fineweb-2 portions of Scandinavian languages.
This model was created for the paper "Preferences for Idiomatic Language are Acquired Slowly -- and Forgotten Quickly: A Case Study on Swedish", TACL 2026.
@misc{kunz2026preferencesidiomaticlanguageacquired,
  title={Preferences for Idiomatic Language are Acquired Slowly -- and Forgotten Quickly: A Case Study on Swedish},
  author={Jenny Kunz},
  year={2026},
  eprint={2602.03484},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2602.03484},
}
This is a SmolLM2-135M model continually pre-trained on the Swedish portion of Fineweb-2.
This is a research model intended for studying pre-training dynamics, and I do not recommend it for any practical use. It is trained on a web corpus and no alignment has been performed, so the model will likely reflect the biases of its training data and hallucinate frequently.
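For research use, the model can be loaded like any causal LM with the 🤗 Transformers library. A minimal sketch follows; the repo id is a placeholder (the card does not state the exact id), and the Swedish prompt is just an illustration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual model id from this collection.
model_id = "your-username/smollm2-135m-cpt-fineweb2-swe"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Plain text continuation; no chat template applies, since the model is not aligned.
inputs = tokenizer("Det var en gång", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```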
Base model
HuggingFaceTB/SmolLM2-135M