Qwen3-0.6B SEO uczciweseo.pl (GGUF) (Experimental)

Domain-specific fine-tuned version of Kelnux/Qwen3-0.6B-seo-bilingual for the uczciweseo.pl brand.

Note: This is an experimental model. The 0.6B parameter count limits how much domain knowledge can be injected; Polish text generation quality may degrade on brand-specific questions, while English SEO knowledge is well preserved.

Training Pipeline

  1. Stage 1: Qwen3-0.6B + bilingual SEO LoRA (r=16, 7,880 examples)
  2. Stage 2: Domain adaptation LoRA (r=16, attention-only) on uczciweseo.pl brand knowledge
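The Stage 2 adapter described above can be sketched with peft. This is a minimal illustration of the configuration stated in this card (r=16, alpha=32, attention-only), not the actual training script; the dropout value is an assumption, since the card does not state it.

```python
from peft import LoraConfig

# Stage 2 domain-adaptation adapter: rank 16, alpha 32,
# attention-only (q_proj and v_proj), applied on top of the
# merged bilingual Stage 1 model.
stage2_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention-only, per the card
    lora_dropout=0.05,                    # assumption: not stated in the card
    task_type="CAUSAL_LM",
)
```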

Stage 2 Details

  • Method: LoRA (r=16, alpha=32, q_proj+v_proj only) on merged bilingual model
  • Dataset: 2,312 examples (925 domain-specific + 1,387 bilingual SEO control)
  • Domain data: 185 unique Q&A pairs oversampled 5x to 925 examples
  • Mixing ratio: 60% bilingual / 40% domain
  • Epochs: 3
  • Learning rate: 1.5e-5 (cosine scheduler)
  • Gradient clipping: 0.3
  • Training loss: 1.55
  • Best eval loss: 1.36
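The oversampling and mixing numbers above are self-consistent; a minimal sketch of the dataset assembly (the records are placeholders, not the actual training data):

```python
# Assemble the Stage 2 mix: 185 unique domain Q&A pairs oversampled 5x,
# plus the bilingual SEO control set.
domain_unique = [{"id": i, "source": "domain"} for i in range(185)]
domain = domain_unique * 5                                            # 5x oversampling -> 925 examples
bilingual = [{"id": i, "source": "bilingual"} for i in range(1387)]   # control set

mix = domain + bilingual
assert len(mix) == 2312          # matches the dataset size above

# Check the reported 60% bilingual / 40% domain split
bilingual_share = len(bilingual) / len(mix)
domain_share = len(domain) / len(mix)
print(f"bilingual {bilingual_share:.0%} / domain {domain_share:.0%}")  # -> bilingual 60% / domain 40%
```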

Brand Knowledge Target

The training data covers uczciweseo.pl (EXELMEDIA sp. z o.o.):

  • Company values: no long-term contracts, full transparency, client ownership of assets
  • Services: SEO, Google Ads, Bing Ads, AI SEO, CRO, automation
  • Industry experience: construction, legal, industrial, automotive, furniture, e-commerce
  • Case studies: Elektrobim, Budmater, Higo, Sushi-sklep, ShopGracz, Tulisie, Meble Kukulka
  • Educational content: choosing SEO agency, fair contracts, common scams (#nabiciwseo)

Limitations

  • 0.6B model has limited capacity for domain-specific knowledge injection
  • Polish text quality may degrade on brand-specific questions
  • English SEO performance is well preserved
  • Recommended for research/experimentation, not production use

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged full-precision model (for the GGUF files, use llama.cpp instead)
model = AutoModelForCausalLM.from_pretrained("Kelnux/Qwen3-0.6B-seo-uczciweseo")
tokenizer = AutoTokenizer.from_pretrained("Kelnux/Qwen3-0.6B-seo-uczciweseo")

# ChatML prompt; system message in Polish: "You are the SEO assistant of the
# Uczciwe SEO company.", user question: "What is Uczciwe SEO?"
prompt = '<|im_start|>system\nJestes asystentem SEO firmy Uczciwe SEO.<|im_end|>\n<|im_start|>user\nCzym jest Uczciwe SEO?<|im_end|>\n<|im_start|>assistant\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
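Since this repository ships GGUF weights, they can also be run directly with llama.cpp; a sketch of the invocation (the exact GGUF filename is an assumption, check the repository's file list):

```shell
# Run the 16-bit GGUF with llama.cpp's CLI (filename is illustrative)
llama-cli -m Qwen3-0.6B-seo-uczciweseo-f16.gguf \
  -p "Czym jest Uczciwe SEO?" -n 300
```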