Qwen3-0.6B SEO Fine-tuned (GGUF)

GGUF versions of Kelnux/Qwen3-0.6B-seo-finetuned for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible runtimes.

Available Files

File                                  Quantization             Size
Qwen3-0.6B-seo-finetuned-f16.gguf     F16 (full precision)     ~1.1 GB
Qwen3-0.6B-seo-finetuned-q8_0.gguf    Q8_0 (8-bit quantized)   ~610 MB
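The sizes above follow from simple per-weight arithmetic. A back-of-envelope sketch (actual files differ slightly because of metadata and tensors that are not quantized):

```python
# Rough GGUF file-size estimates for a ~0.6B-parameter model.
params = 0.6e9

# F16 stores every weight in 2 bytes.
f16_gb = params * 2 / 1e9

# Q8_0 packs weights in blocks of 32: 32 int8 values plus one
# fp16 scale per block, i.e. 34 bytes per 32 weights (8.5 bits/weight).
q8_gb = params * (34 / 32) / 1e9

print(f"F16 : ~{f16_gb:.2f} GB")   # ~1.20 GB (file on disk: ~1.1 GB)
print(f"Q8_0: ~{q8_gb:.2f} GB")    # ~0.64 GB (file on disk: ~610 MB)
```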

Training Details

  • Base model: Qwen/Qwen3-0.6B
  • Method: LoRA (r=16, alpha=32)
  • Dataset: metehan777/global-seo-knowledge (2,065 examples)
  • Final loss: 1.14
  • Token accuracy: 79.8%
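The LoRA hyperparameters above determine the adapter's scale and size. A minimal sketch of that arithmetic (the hidden dimension of 1024 is illustrative, not taken from the Qwen3 config):

```python
# How r=16, alpha=32 translate into adapter behavior and size.
r, alpha = 16, 32
d_in = d_out = 1024  # illustrative hidden dimension, not the real config

# The low-rank update B @ A is scaled by alpha / r before being
# added to the frozen base weight.
scaling = alpha / r                  # 2.0

# Trainable parameters for one adapted d_out x d_in matrix:
# A has shape (r, d_in), B has shape (d_out, r).
lora_params = r * d_in + d_out * r   # 32768 per adapted matrix

print(scaling, lora_params)
```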

Usage

With llama.cpp

llama-cli -m Qwen3-0.6B-seo-finetuned-q8_0.gguf -p "What is robots.txt in SEO?" -n 200

With Ollama

Create a Modelfile that points at the quantized file:

FROM ./Qwen3-0.6B-seo-finetuned-q8_0.gguf
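A fuller Modelfile sketch, with illustrative sampling defaults (the parameter values are assumptions, not the model's tuned settings):

```
FROM ./Qwen3-0.6B-seo-finetuned-q8_0.gguf

# Illustrative sampling defaults -- tune to taste.
PARAMETER temperature 0.7
PARAMETER num_predict 200
```

Then build and run it (the model name qwen3-seo is arbitrary):

ollama create qwen3-seo -f Modelfile
ollama run qwen3-seo "What is robots.txt in SEO?"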

With LM Studio

Download the GGUF file and load it directly in LM Studio.
