# GoldenNet-Qwen2.5-3B-QLoRA-v1

## Model Description
GoldenNet-Qwen2.5-3B-QLoRA-v1 is a QLoRA fine-tuned version of Qwen/Qwen2.5-3B-Instruct specialized for Iraqi Government Correspondence Processing.
This is the larger 3B variant in the series, offering improved accuracy over the 0.5B models while remaining small enough for edge deployment.
## Tasks
- **Document Classification** - 8 categories: طلب (request), شكوى (complaint), تقرير (report), إعلام (announcement), استفسار (inquiry), دعوة (invitation), تعميم (circular), إحالة (referral)
- **Named Entity Recognition** - Extracts persons, organizations, locations, dates, monetary values, and laws
## Model Comparison
| Model | Size | Method | Train Loss | Eval Loss | Training Time |
|---|---|---|---|---|---|
| 0.5B-QLoRA-v1 | 0.5B | QLoRA | 0.448 | 0.2998 | 49s |
| 0.5B-LoRA-v1 | 0.5B | LoRA | 0.496 | 0.3665 | 70s |
| 0.5B-Full-v1 | 0.5B | Full | 0.461 | 0.3636 | 121s |
| 3B-QLoRA-v1 | 3B | QLoRA | 0.396 | 0.2521 | 14min |
## Training Details
| Parameter | Value |
|---|---|
| Base Model | Qwen/Qwen2.5-3B-Instruct |
| Fine-tuning Method | QLoRA (4-bit quantization + LoRA) |
| Quantization | 4-bit (bitsandbytes) |
| LoRA Rank | 64 |
| LoRA Alpha | 128 |
| LoRA Dropout | 0.05 |
| Learning Rate | 1e-4 |
| Epochs | 3 |
| Batch Size | 1 (effective: 16) |
| Max Sequence Length | 1024 |
| Precision | BF16 |
| Total Parameters | 3.2B |
| Trainable Parameters | 119.7M (3.7%) |
| Hardware | NVIDIA RTX 5070 (8GB VRAM) |
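The hyperparameters above correspond roughly to the following peft/bitsandbytes configuration. This is an illustrative sketch, not the original training script; in particular, the NF4 quant type and the target modules are assumptions, since the card does not specify them:

```python
# Sketch of a QLoRA setup matching the table above (illustration only).
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization of the frozen base model (bitsandbytes)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",          # assumption: NF4, the usual QLoRA choice
    bnb_4bit_compute_dtype="bfloat16",  # BF16 precision, as in the table
)

# LoRA adapters trained on top of the quantized weights
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```

Pass `quantization_config=bnb_config` to `from_pretrained` and wrap the model with `peft.get_peft_model(model, lora_config)` to reproduce a setup of this shape.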
### Loss Progression
- Epoch 0.5: 1.143
- Epoch 1.0: 0.462
- Epoch 1.5: 0.295
- Epoch 2.0: 0.244
- Epoch 2.5: 0.192
- Epoch 3.0: 0.181 (final)
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "Alamori/GoldenNet-Qwen2.5-3B-QLoRA-v1",
    device_map="auto",
    torch_dtype="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Alamori/GoldenNet-Qwen2.5-3B-QLoRA-v1")

# Classification example
correspondence = """جمهورية العراق
وزارة التربية
العدد: 1234/ت/2025
إلى/ السيد مدير عام التعليم المحترم
م/ طلب تعيين معلمين
نرجو الموافقة على تعيين 50 معلماً.
مع التقدير"""

# Instruction (in Arabic): "Classify the following government correspondence into
# one of the categories: request, complaint, report, announcement, inquiry,
# invitation, circular, referral. Answer in JSON format."
instruction = "صنّف المراسلة الحكومية التالية إلى إحدى الفئات: طلب، شكوى، تقرير، إعلام، استفسار، دعوة، تعميم، إحالة. أجب بصيغة JSON."

messages = [{"role": "user", "content": f"{instruction}\n\n{correspondence}"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# do_sample=True is needed for temperature to take effect
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```
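Since the model is prompted to answer in JSON, the reply may arrive wrapped in a markdown code fence or surrounding prose. A small helper like the following (hypothetical, not part of the model's API; the `"category"` key is only an illustration of a possible schema) can extract and parse the JSON object:

```python
import json
import re

def parse_json_reply(reply: str):
    """Extract the first JSON object from a model reply.

    Handles replies wrapped in markdown code fences or surrounding prose.
    Returns a dict, or None if no valid JSON object is found.
    """
    match = re.search(r"\{.*\}", reply, flags=re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

# Example with a fenced reply (the JSON schema here is illustrative):
reply = '```json\n{"category": "طلب"}\n```'
result = parse_json_reply(reply)
```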
## When to Use This Model
- Use 3B-QLoRA-v1 for best accuracy when you have sufficient VRAM (~6GB for inference)
- Use 0.5B-QLoRA-v1 for fast inference on constrained hardware
- The 3B model's eval loss is about 16% lower than the best 0.5B variant (0.2521 vs 0.2998)
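The ~6GB inference figure is roughly consistent with a back-of-the-envelope estimate: 3.2B parameters at 4 bits for the quantized weights, plus KV cache, dequantization buffers, and framework overhead (the overhead depends on sequence length and batch size, so only the weight term is computed here):

```python
# Rough VRAM estimate for 4-bit inference of a 3.2B-parameter model.
params = 3.2e9
weight_bytes = params * 0.5          # 4 bits = 0.5 bytes per parameter
weights_gib = weight_bytes / 1024**3 # ~1.49 GiB for the quantized weights alone
# KV cache and runtime buffers add several GiB on top, which is how
# the total lands in the ~6 GB range quoted above.
print(f"quantized weights: ~{weights_gib:.2f} GiB")
```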
## Related Models
- GoldenNet-Qwen2.5-0.5B-QLoRA-v1 - Smaller, faster variant
- GoldenNet-Qwen2.5-0.5B-LoRA-v1 - Standard LoRA
- GoldenNet-Qwen2.5-0.5B-Full-v1 - Full fine-tune
## License
Apache 2.0
Developed by Golden Net AI
Empowering Iraqi Government Digital Transformation