This is a base (not instruction-tuned) large language model, continually pre-trained on Norwegian data starting from the English OLMo2-13B model.
Our training data mixture included HPLTv3 Bokmål and Nynorsk, FinePDF Bokmål and Nynorsk, MADLAD400 Norwegian, OLMo-Mix, and a Northern Sámi dataset. The model was trained for 33,000 steps on around 300 billion tokens. Intermediate checkpoints are published as branches of this repository; see the loading sketch below.
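The model and its intermediate checkpoints can be loaded with the Hugging Face transformers library; branches are selected via the `revision` argument. A minimal sketch follows; the repository id and branch name are placeholders (assumptions), so substitute the actual model id and checkpoint branch.

```python
# Minimal loading sketch with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HPLT/norwegian-olmo2-13b"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Intermediate checkpoints are published as branches; pick one via `revision`:
# model = AutoModelForCausalLM.from_pretrained(model_id, revision="step10000")  # hypothetical branch name

# Since this is a base (not instruction-tuned) model, prompt it with plain text.
inputs = tokenizer("Norge er et land i", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```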
Training was conducted as part of the HPLT project.
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546].
Base model: allenai/OLMo-2-1124-13B