# AgriSmart-TinyLlama-LoRA
A LoRA adapter for TinyLlama-1.1B-Chat, fine-tuned on agricultural Q&A data to serve as a specialized farming assistant.
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Load the base model with 4-bit NF4 quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("kellenmurerwa/AgriSmart-TinyLlama-LoRA")

# Load the LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(base_model, "kellenmurerwa/AgriSmart-TinyLlama-LoRA")

# Ask a question using the instruction format the adapter was trained on
question = "What is the best fertilizer for rice crops?"
prompt = f"### Instruction:\n{question}\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,  # required for temperature/top_p to take effect
        temperature=0.7,
        top_p=0.9,
    )
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response.split("### Response:\n")[-1].strip())
```
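The prompt construction and response extraction above can be factored into small helpers so the same format is reused consistently. A minimal sketch; the helper names `build_prompt` and `extract_response` are illustrative, not part of the released code:

```python
# Helpers mirroring the "### Instruction:" format used in the Quick Start.
# Names are illustrative, not part of the released adapter.

def build_prompt(question: str) -> str:
    """Wrap a question in the instruction format the adapter was trained on."""
    return f"### Instruction:\n{question}\n### Response:\n"

def extract_response(decoded: str) -> str:
    """Keep only the text after the final response marker."""
    return decoded.split("### Response:\n")[-1].strip()

prompt = build_prompt("How do I control aphids on kale?")
print(extract_response(prompt + "Use neem oil sprays."))
```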
## Training Details
| Parameter | Value |
|---|---|
| Base Model | TinyLlama-1.1B-Chat-v1.0 |
| Dataset | KisanVaani agriculture-qa-english-only (22,615 samples) |
| LoRA Rank | 16 |
| LoRA Alpha | 32 |
| Target Modules | q_proj, v_proj |
| Trainable Parameters | ~2.25M (0.2% of total) |
| Learning Rate | 5e-5 |
| Epochs | 2 |
| Quantization | 4-bit NF4 |
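The ~2.25M trainable-parameter figure can be sanity-checked from the table: LoRA adds two low-rank factors per target matrix, for r·(d_in + d_out) parameters each. A back-of-the-envelope sketch, assuming TinyLlama-1.1B's published shapes (hidden size 2048, 22 layers, grouped-query attention with 4 KV heads of dimension 64, so `v_proj` projects 2048 → 256):

```python
# LoRA adds factors A (r x d_in) and B (d_out x r) per adapted matrix,
# i.e. r * (d_in + d_out) trainable parameters each.
r = 16                  # LoRA rank from the table
n_layers = 22           # TinyLlama-1.1B decoder layers
hidden = 2048           # hidden size
kv_dim = 4 * 64         # 4 KV heads x head_dim 64 -> v_proj output is 256

q_proj = r * (hidden + hidden)   # 2048 -> 2048
v_proj = r * (hidden + kv_dim)   # 2048 -> 256 (grouped-query attention)
total = n_layers * (q_proj + v_proj)
print(total)  # 2252800, i.e. ~2.25M, roughly 0.2% of 1.1B
```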
## Evaluation
| Metric | Score |
|---|---|
| BLEU | 0.1810 |
| ROUGE-1 | 0.5129 |
| ROUGE-2 | 0.3268 |
| ROUGE-L | 0.4826 |
| Perplexity | 2.2583 |
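Perplexity is the exponential of the mean per-token cross-entropy loss, so the 2.2583 figure corresponds to a loss of about 0.815 nats per token. A quick check of that relationship:

```python
import math

# perplexity = exp(mean cross-entropy loss), so loss = ln(perplexity)
perplexity = 2.2583
loss = math.log(perplexity)
print(round(loss, 3))  # 0.815
```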
## Links
- GitHub: AgriSmart-Assistant
- Colab Notebook: