BgGPT-Gemma-3
GGUF quantized versions of BgGPT-Gemma-3-27B-IT for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible tools. BgGPT 3.0 is a series of Bulgarian-adapted LLMs based on Gemma 3, developed by INSAIT.
Blog post: BgGPT-3 Release
| Filename | Quant type | Description |
|---|---|---|
| BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf | Q4_K_M | Good balance of quality and size; recommended |
| BgGPT-Gemma-3-27B-IT-Q5_K_M.gguf | Q5_K_M | High quality, slightly larger |
| BgGPT-Gemma-3-27B-IT-Q6_K.gguf | Q6_K | Very high quality, near lossless |
| BgGPT-Gemma-3-27B-IT-Q8_0.gguf | Q8_0 | Essentially lossless |
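As a rule of thumb, a quantized file's size is roughly parameter count times bits per weight. A minimal sketch for choosing a quant by disk/RAM budget — the bit widths below are nominal values I'm assuming for each format (real K-quant files mix precisions per block and carry metadata, so actual files run somewhat larger):

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough file-size estimate: parameters * bits / 8, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

# Nominal (assumed) effective bits per weight for the quants in the table above.
for quant, bits in [("Q4_K_M", 4.5), ("Q5_K_M", 5.5), ("Q6_K", 6.5), ("Q8_0", 8.5)]:
    print(f"{quant}: ~{approx_gguf_size_gb(27e9, bits):.0f} GB")
```

This is only a ballpark; check the actual file sizes on the repo's Files tab before downloading.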
```bash
# Download a specific quantization
huggingface-cli download INSAIT-Institute/BgGPT-Gemma-3-27B-IT-GGUF \
  --include "BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf" \
  --local-dir .

# Run with llama-cli
llama-cli -m BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf \
  -p "Кога е основан Софийският университет?" \
  -n 512
# Prompt: "When was Sofia University founded?"
```
To run the model with Ollama, create a Modelfile pointing at the downloaded GGUF:

```
FROM ./BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf
```
Then:

```bash
ollama create bggpt-gemma3-27b -f Modelfile
ollama run bggpt-gemma3-27b
```
In LM Studio, search for BgGPT-Gemma-3-27B-IT-GGUF in the model browser, or download a GGUF file manually and load it.
```bash
# Download all quantizations
huggingface-cli download INSAIT-Institute/BgGPT-Gemma-3-27B-IT-GGUF

# Download a specific file
huggingface-cli download INSAIT-Institute/BgGPT-Gemma-3-27B-IT-GGUF \
  --include "BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf" \
  --local-dir .
```
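Downloads can also be scripted with the `huggingface_hub` Python library via its single-file `hf_hub_download` API. A minimal sketch — the `gguf_filename` helper is mine, illustrating this repo's naming scheme, and the download line is commented out because the file is large:

```python
from huggingface_hub import hf_hub_download

REPO_ID = "INSAIT-Institute/BgGPT-Gemma-3-27B-IT-GGUF"

def gguf_filename(quant: str) -> str:
    # Files in this repo follow the <model>-<quant>.gguf pattern from the table above.
    return f"BgGPT-Gemma-3-27B-IT-{quant}.gguf"

# Fetches into the local Hugging Face cache and returns the resolved path.
# Uncomment to download (the Q4_K_M file is on the order of tens of GB):
# path = hf_hub_download(repo_id=REPO_ID, filename=gguf_filename("Q4_K_M"))
print(gguf_filename("Q4_K_M"))
```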
BgGPT-Gemma-3-27B-IT-GGUF is distributed under the Gemma Terms of Use.
Base model: INSAIT-Institute/BgGPT-Gemma-3-27B-IT