# Sinhala TTS Piper Version 1

Sinhala neural text-to-speech voice in the Piper format (ONNX model + JSON config). The checkpoint was trained by Intellisr on a custom dataset produced by the Access to Success Organization, using recordings of Mr Ashoka Weerawardhana. The model targets Sinhala (`si`): phonemization follows the espeak-ng voice `si`, audio is 16 kHz, and the voice is single-speaker.
## Credits and data provenance
| Role | Organization / person |
|---|---|
| Training | Intellisr |
| Dataset | Access to Success Organization (custom Sinhala corpus) |
| Speaker voice | Mr Ashoka Weerawardhana |
Use of this model should respect the dataset license and any agreements between the organizations and the speaker. The technical runtime remains subject to Piper’s GPL-3.0 toolchain when combined with upstream Piper components.
## Model files
| File | Description |
|---|---|
| `model.onnx` | Piper VITS-style ONNX graph for inference |
| `model.onnx.json` | Voice metadata, phoneme map, and default synthesis controls |
Bundle metadata reports `piper_version` 1.0.0, `num_speakers` 1, and `phoneme_type` `espeak`.
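These metadata fields can be sanity-checked before loading the ONNX model. A minimal sketch, assuming the field layout commonly seen in Piper `model.onnx.json` files (`phoneme_type`, `num_speakers`, `audio.sample_rate`); adjust the keys if your bundle differs:

```python
import json

def check_voice_config(path):
    """Verify a Piper voice config matches this voice's expected metadata.

    Key names are an assumption based on the layout typically found in
    Piper model.onnx.json bundles.
    """
    with open(path, encoding="utf-8") as f:
        cfg = json.load(f)
    if cfg.get("phoneme_type") != "espeak":
        raise ValueError("expected espeak phonemization")
    if cfg.get("num_speakers") != 1:
        raise ValueError("expected a single-speaker voice")
    if cfg.get("audio", {}).get("sample_rate") != 16000:
        raise ValueError("expected 16 kHz audio")
    return cfg
```

Running `check_voice_config("model.onnx.json")` returns the parsed config on success, so the same call can feed later synthesis steps.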
## Intended use
Local/offline Sinhala speech synthesis with the Piper engine (CLI, the `piper-tts` Python package, or integrations such as Home Assistant), subject to your rights to use the voice and dataset.
## How to run (Piper)
Install Piper (`piper-tts`) and ensure `espeak-ng` is available on your system (the same stack used in training).
```sh
echo "ආයුබෝවන්" | piper \
  --model model.onnx \
  --config model.onnx.json \
  --output_file sinhala.wav
```
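From Python, the same invocation can be wrapped with `subprocess`, using only the flags shown above. A minimal sketch, assuming the `piper` executable is on your `PATH`:

```python
import subprocess

def piper_cmd(model, config, out_wav):
    # Mirrors the CLI call above: model, config, and output WAV path.
    return ["piper", "--model", model, "--config", config, "--output_file", out_wav]

def synthesize(text, model="model.onnx", config="model.onnx.json", out_wav="out.wav"):
    # piper reads the input text from stdin, as in the echo | piper example.
    subprocess.run(piper_cmd(model, config, out_wav),
                   input=text.encode("utf-8"), check=True)
```

For example, `synthesize("ආයුබෝවන්")` writes `out.wav` next to the script. The `piper-tts` package also exposes a Python API if you prefer not to shell out.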
Default inference settings from the config: `noise_scale` 0.667, `length_scale` 1.0, `noise_w` 0.8.
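These defaults can be read back from the config file rather than hard-coded. A small sketch, assuming the controls sit under an `inference` key as commonly seen in Piper voice configs (the key name is an assumption; the fallback values are the defaults quoted above):

```python
import json

def inference_defaults(config_path):
    """Return the default synthesis controls from a Piper voice config.

    Assumes the defaults live under an "inference" key; falls back to the
    values documented for this voice if a field is absent.
    """
    with open(config_path, encoding="utf-8") as f:
        cfg = json.load(f)
    inf = cfg.get("inference", {})
    return {
        "noise_scale": inf.get("noise_scale", 0.667),   # variability of speech
        "length_scale": inf.get("length_scale", 1.0),   # >1.0 slows speech down
        "noise_w": inf.get("noise_w", 0.8),             # phoneme-duration noise
    }
```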
## Training stack

Training used [OHF-Voice/piper1-gpl](https://github.com/OHF-Voice/piper1-gpl). The environment included:

- `requirements.txt` from the Piper training repo: `cython>=0.29.0`, `piper-phonemize==1.1.0`, `librosa>=0.9.2`, `numpy>=1.19.0`, `onnxruntime>=1.11.0`, `pytorch-lightning==1.7.0`, `torch==1.11.0`, `torchtext==0.12.0`, `torchvision==0.12.0`, `torchaudio==0.11.0`, `torchmetrics==0.11.4` (compatibility pin)
- Monotonic align built via `build_monotonic_align.sh`
- System dependency: espeak-ng (`apt-get install espeak-ng` on Debian/Ubuntu-style images)
## License

This voice bundle is distributed under the GNU General Public License v3.0 to align with piper1-gpl (GPL-3.0). Review `COPYING` in the upstream repository if you redistribute combined works. Dataset and voice rights may impose additional terms beyond this README; seek clarification from Intellisr and the Access to Success Organization where needed.
## Citation
Credit the speaker, dataset provider, trainer, and Piper as appropriate.
```bibtex
@misc{sinhala_piper_v1_intellisr,
  title = {Sinhala {TTS} {Piper} Version 1},
  author = {Weerawardhana, Ashoka and {Access to Success Organization} and {Intellisr}},
  year = {2025},
  note = {Piper ONNX voice; trained by Intellisr on Access to Success Organization custom dataset; Piper engine: OHF-Voice/piper1-gpl}
}

@software{piper2024,
  title = {Piper: Fast and local neural text-to-speech},
  url = {https://github.com/OHF-Voice/piper1-gpl},
  note = {Open Home Foundation}
}
```