# Moshi Fine-tuned LoRA - Burak's Voice AI

This is a LoRA adapter for Moshi, fine-tuned to answer questions about Burak's background and experience.
## Model Details
- Base Model: kyutai/moshiko-pytorch-bf16
- Training Method: LoRA (Low-Rank Adaptation)
- LoRA Rank: 16
- Training Steps: 500
- Dataset: 223 Q&A conversations (~47 minutes)
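To illustrate what "LoRA Rank: 16" means: LoRA freezes the base weight matrix and learns a low-rank update `W' = W + (alpha / r) * B @ A`, where `A` and `B` are small rank-`r` matrices. A minimal NumPy sketch (the dimensions and `alpha` value here are illustrative, not taken from this adapter's config):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 16, 32  # rank 16 as in this adapter; alpha illustrative

W = rng.standard_normal((d_out, d_in))     # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection (r x d_in)
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized

# LoRA adds a low-rank update scaled by alpha / r
W_adapted = W + (alpha / r) * (B @ A)

# With B initialized to zero, the adapter starts as an exact no-op
assert np.allclose(W_adapted, W)
```

Only `A` and `B` are trained and shipped in the adapter, which is why the `lora.safetensors` file is tiny compared to the base model.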
## Usage

```bash
# Install Moshi
pip install git+https://github.com/kyutai-labs/moshi.git#subdirectory=moshi

# Run with LoRA
python -m moshi.server \
  --lora-weight=lora.safetensors \
  --config-path=config.json
```
Then open http://localhost:8998 in your browser.
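If you want to inspect which tensors the adapter contains before serving it, the `.safetensors` format is easy to read without extra dependencies: the file starts with an 8-byte little-endian header length followed by a JSON header listing tensor names, dtypes, and shapes. A sketch (the demo builds a tiny file in place; the tensor name `lora_A` is a made-up example, and for real use you would pass `"lora.safetensors"`):

```python
import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file (tensor names, dtypes, shapes)."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # first 8 bytes: header size
        return json.loads(f.read(header_len))

# Demo on a minimal synthetic file; real use: read_safetensors_header("lora.safetensors")
header = {"lora_A": {"dtype": "F32", "shape": [16, 64], "data_offsets": [0, 4096]}}
payload = json.dumps(header).encode()
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(payload)) + payload + b"\x00" * 4096)

meta = read_safetensors_header("demo.safetensors")
for name, info in meta.items():
    print(name, info["dtype"], info["shape"])
```

This is a quick way to confirm the adapter's rank (the small dimension of each LoRA matrix pair) matches what the card states.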
## Training Details
- Framework: moshi-finetune
- GPUs: 2x Kaggle T4
- Training Time: ~15 hours
- Batch Size: 1 per GPU (effective: 2)
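The effective batch size above is just per-device batch times the number of GPUs under data parallelism (gradient accumulation, if any, would multiply in as well; it is not stated on this card, so it is assumed to be 1 here):

```python
per_device_batch = 1  # from the training setup above
num_gpus = 2          # 2x Kaggle T4
grad_accum_steps = 1  # assumed; not stated in the card

effective_batch = per_device_batch * num_gpus * grad_accum_steps
print(effective_batch)  # 2, matching "effective: 2" above
```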
## License

Apache 2.0 (same as the base Moshi model)