---
license: apache-2.0
base_model: Qwen/Qwen3-0.6B
datasets:
- metehan777/global-seo-knowledge
tags:
- seo
- fine-tuned
- qwen3
- gguf
language:
- en
pipeline_tag: text-generation
library_name: llama.cpp
---

# Qwen3-0.6B SEO Fine-tuned (GGUF)

GGUF versions of [Kelnux/Qwen3-0.6B-seo-finetuned](https://huggingface.co/Kelnux/Qwen3-0.6B-seo-finetuned) for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible runtimes.

## Available Files

| File | Quantization | Size |
|------|--------------|------|
| Qwen3-0.6B-seo-finetuned-f16.gguf | F16 (full precision) | ~1.1 GB |
| Qwen3-0.6B-seo-finetuned-q8_0.gguf | Q8_0 (8-bit quantized) | ~610 MB |

## Training Details

- **Base model**: Qwen/Qwen3-0.6B
- **Method**: LoRA (r=16, alpha=32)
- **Dataset**: metehan777/global-seo-knowledge (2,065 examples)
- **Final loss**: 1.14
- **Token accuracy**: 79.8%

## Usage

### With llama.cpp

```bash
llama-cli -m Qwen3-0.6B-seo-finetuned-q8_0.gguf -p "What is robots.txt in SEO?" -n 200
```

### With Ollama

Create a `Modelfile` pointing at the downloaded GGUF:

```
FROM ./Qwen3-0.6B-seo-finetuned-q8_0.gguf
```

### With LM Studio

Download the GGUF file and load it directly in LM Studio.
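The Ollama `Modelfile` shown above only declares the model source; to actually use it you register it with the Ollama CLI and then run it. A minimal sketch (the model name `qwen3-seo` is an arbitrary example, not part of this release):

```bash
# Register the GGUF with Ollama under a local name,
# using the Modelfile in the current directory
ollama create qwen3-seo -f Modelfile

# Run a one-off prompt against the registered model
ollama run qwen3-seo "What is robots.txt in SEO?"
```

These commands assume a local Ollama daemon is running and the GGUF file sits next to the `Modelfile`.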