BgGPT-Gemma-3-27B-IT-GGUF

GGUF quantized versions of BgGPT-Gemma-3-27B-IT for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible tools. BgGPT 3.0 is a series of Bulgarian-adapted LLMs based on Gemma 3, developed by INSAIT.

Blog post: BgGPT-3 Release

Key improvements over BgGPT 2.0

  1. Vision-language understanding — The models understand both text and images within the same context.
  2. Instruction-following — Trained on a broader range of tasks, multi-turn conversations, complex instructions, and system prompts.
  3. Longer context — Effective context window of 131k tokens for longer conversations and larger documents.
  4. Updated knowledge cut-off — Pretraining data up to May 2025, instruction fine-tuning up to October 2025.

Available quantizations

Filename                           Quant type   Description
BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf   Q4_K_M       Good balance of quality and size — recommended
BgGPT-Gemma-3-27B-IT-Q5_K_M.gguf   Q5_K_M       High quality, slightly larger
BgGPT-Gemma-3-27B-IT-Q6_K.gguf     Q6_K         Very high quality, near lossless
BgGPT-Gemma-3-27B-IT-Q8_0.gguf     Q8_0         Essentially lossless
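As a rough guide to disk and memory requirements, GGUF file size scales with bits per weight. A minimal sketch, assuming typical llama.cpp bits-per-weight figures for these quant types (approximate community numbers, not the measured sizes of these files):

```shell
# Rough file-size estimate for a 27B-parameter model.
# The bits-per-weight values are approximations for each quant type.
params=27000000000
for entry in "Q4_K_M 4.85" "Q5_K_M 5.70" "Q6_K 6.56" "Q8_0 8.50"; do
  set -- $entry
  awk -v p="$params" -v bpw="$2" -v name="$1" \
    'BEGIN { printf "%-7s ~%.1f GB\n", name, p * bpw / 8 / 1e9 }'
done
```

Expect roughly 16–17 GB for Q4_K_M and close to 29 GB for Q8_0, plus additional memory for the KV cache at inference time.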

Usage

llama.cpp

# Download a specific quantization
huggingface-cli download INSAIT-Institute/BgGPT-Gemma-3-27B-IT-GGUF \
    --include "BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf" \
    --local-dir .

# Run with llama-cli
llama-cli -m BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf \
    -p "Кога е основан Софийският университет?" \
    -n 512
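For an OpenAI-compatible HTTP endpoint instead of an interactive CLI, the same file can be served with llama-server, which ships with llama.cpp. A sketch; the port and context size here are arbitrary example values, and `-c` can be raised toward the 131k limit if memory allows:

```shell
# Serve the model over an OpenAI-compatible API
llama-server -m BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf \
    -c 8192 \
    --port 8080

# Query it from another terminal
curl http://localhost:8080/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"messages": [{"role": "user", "content": "Кога е основан Софийският университет?"}]}'
```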

Ollama

Create a Modelfile:

FROM ./BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf

Then:

ollama create bggpt-gemma3-27b -f Modelfile
ollama run bggpt-gemma3-27b
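Generation settings can also be pinned in the Modelfile instead of being passed at run time. A sketch with illustrative values, not tuned recommendations:

```text
FROM ./BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf
PARAMETER num_ctx 8192
PARAMETER temperature 0.7
```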

LM Studio

Search for BgGPT-Gemma-3-27B-IT-GGUF in the model browser, or download a GGUF file manually and load it.

Download

# Download all quantizations
huggingface-cli download INSAIT-Institute/BgGPT-Gemma-3-27B-IT-GGUF

# Download a specific file
huggingface-cli download INSAIT-Institute/BgGPT-Gemma-3-27B-IT-GGUF \
    --include "BgGPT-Gemma-3-27B-IT-Q4_K_M.gguf" \
    --local-dir .

License

BgGPT-Gemma-3-27B-IT-GGUF is distributed under the Gemma Terms of Use.
