---
task_categories:
- audio-text-to-text
---

# QualiSpeech: A Speech Quality Assessment Dataset with Natural Language Reasoning and Descriptions

* 📄 Paper: [https://arxiv.org/abs/2503.20290](https://arxiv.org/abs/2503.20290)

**QualiSpeech** is a comprehensive English-language speech quality assessment dataset designed to go beyond traditional numerical scores. It introduces detailed natural language comments with reasoning, capturing low-level speech perception aspects such as noise, distortion, continuity, speed, naturalness, listening effort, and overall quality.

## 🌟 Key Features

* **11 annotated aspects**, comprising 7 numerical scores and 4 specific descriptions (e.g., noise type and time, distortion type and time, unnatural pauses, vocal characteristics).
* **Natural language descriptions** capturing contextual and logical insights for overall quality reasoning.
* **Over 15,000 speech samples** from diverse sources, including synthetic speech (e.g., BVCC, recent TTS models) and real speech (e.g., NISQA, GigaSpeech).
* **QualiSpeech Benchmark** for evaluating low-level speech perception in auditory large language models (LLMs).

## 📁 Dataset Structure

Each sample in the dataset contains:

```yaml
- audio_path: path/to/audio.wav
- scores:
    - noise: 4
    - distortion: 3
    - speed: 3
    - continuity: 5
    - naturalness: 3
    - listening_effort: 5
    - overall: 3
- descriptions:
    - noise_description: "Outdoor music noise, 0–3s"
    - distortion_description: "None"
    - unnatural_pause: "None"
    - feeling_of_voice: "A young man’s gentle voice with a peaceful tone"
- natural_language_description: |
    The speech sample presents a gentle and peaceful tone...
```

## 🔽 Download Instructions

Due to licensing restrictions on the Blizzard Challenge data (the data may NOT be redistributed), please first download the required BVCC data using the provided scripts:

```bash
bash download_bvcc.sh  # or download manually
bash merge_data.sh     # to construct the final QualiSpeech dataset
```

## 📄 References of resources & models used

#### Resources:

- **BVCC**: [Erica Cooper and Junichi Yamagishi. 2021.
How do voices from past speech synthesis challenges compare today? In Proc. SSW, Budapest.](https://zenodo.org/records/6572573)
- **NISQA**: [Gabriel Mittag, Babak Naderi, Assmaa Chehadi, and Sebastian Möller. 2021. NISQA: A deep CNN-self-attention model for multidimensional speech quality prediction with crowdsourced datasets. In Proc. Interspeech, Brno.](https://github.com/gabrielmittag/NISQA/wiki/NISQA-Corpus)
- **GigaSpeech**: [Guoguo Chen, Shuzhou Chai, Guanbo Wang, Jiayu Du, Wei-Qiang Zhang, Chao Weng, Dan Su, Daniel Povey, Jan Trmal, Junbo Zhang, et al. 2021. GigaSpeech: An evolving, multi-domain ASR corpus with 10,000 hours of transcribed audio. In Proc. Interspeech, Brno.](https://github.com/SpeechColab/GigaSpeech)

#### Acoustic models:

- **ChatTTS**: [https://github.com/2noise/ChatTTS](https://github.com/2noise/ChatTTS)
- **XTTS v2**: [https://github.com/coqui-ai/TTS](https://github.com/coqui-ai/TTS)
- **CosyVoice**: [Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, et al. 2024. CosyVoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens. arXiv preprint arXiv:2407.05407.](https://github.com/FunAudioLLM/CosyVoice)
- **F5-TTS**: [Yushen Chen, Zhikang Niu, Ziyang Ma, Keqi Deng, Chunhui Wang, Jian Zhao, Kai Yu, and Xie Chen. 2024. F5-TTS: A fairytaler that fakes fluent and faithful speech with flow matching. arXiv preprint arXiv:2410.06885.](https://github.com/SWivid/F5-TTS)
- **E2-TTS**: [Sefik Emre Eskimez, Xiaofei Wang, Manthan Thakker, Canrun Li, Chung-Hsien Tsai, Zhen Xiao, Hemin Yang, Zirun Zhu, Min Tang, Xu Tan, et al. 2024. E2 TTS: Embarrassingly easy fully non-autoregressive zero-shot TTS. In Proc. SLT, Macao. (implemented by F5-TTS)](https://github.com/SWivid/F5-TTS)
- **OpenVoice V1/V2**: [Zengyi Qin, Wenliang Zhao, Xumin Yu, and Xin Sun. 2023. OpenVoice: Versatile instant voice cloning.
arXiv preprint arXiv:2312.01479.](https://github.com/myshell-ai/OpenVoice)
- **Parler-TTS Mini/Large**: [https://github.com/huggingface/parler-tts](https://github.com/huggingface/parler-tts)
- **VoiceCraft-830M**: [Puyuan Peng, Po-Yao Huang, Shang-Wen Li, Abdelrahman Mohamed, and David Harwath. 2024. VoiceCraft: Zero-shot speech editing and text-to-speech in the wild. In Proc. ACL, Bangkok.](https://github.com/jasonppy/VoiceCraft)

#### Noise:

- **DNS Challenge**: [Harishchandra Dubey, Ashkan Aazami, Vishak Gopal, Babak Naderi, Sebastian Braun, Ross Cutler, Hannes Gamper, Mehrsa Golestaneh, and Robert Aichner. 2023. ICASSP 2023 Deep Noise Suppression Challenge. In Proc. ICASSP, Rhodes Island.](https://github.com/microsoft/DNS-Challenge)

#### Synthesized text:

- **SOMOS**: [Georgia Maniati, Alexandra Vioni, Nikolaos Ellinas, Karolos Nikitaras, Konstantinos Klapsas, June Sig Sung, Gunu Jho, Aimilios Chalamandaris, and Pirros Tsiakoulis. 2022. SOMOS: The Samsung open MOS dataset for the evaluation of neural text-to-speech synthesis. In Proc. Interspeech, Incheon.](https://zenodo.org/records/7378801)

#### Speaker for zero-shot TTS:

- **Libriheavy**: [Wei Kang, Xiaoyu Yang, Zengwei Yao, Fangjun Kuang, Yifan Yang, Liyong Guo, Long Lin, and Daniel Povey. 2024. Libriheavy: A 50,000 hours ASR corpus with punctuation casing and context. In Proc. ICASSP, Seoul.](https://github.com/k2-fsa/libriheavy)

## 📄 License: [Creative Commons Attribution-NonCommercial-ShareAlike 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)

## 📚 Citation

If you use QualiSpeech in your work, please cite:

```
@inproceedings{wang2025qualispeech,
  title={QualiSpeech: A Speech Quality Assessment Dataset with Natural Language Reasoning and Descriptions},
  author={Siyin Wang and Wenyi Yu and Xianzhao Chen and Xiaohai Tian and Jun Zhang and Lu Lu and Yu Tsao and Junichi Yamagishi and Yuxuan Wang and Chao Zhang},
  year={2025},
  booktitle={Proc. ACL},
  address={Vienna}
}
```
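As a minimal sketch of consuming a record with the sample structure shown above — assuming the merged dataset is materialized as JSON lines with those field names (the JSON-lines layout and the `parse_sample` helper are illustrative assumptions, not part of the official scripts):

```python
import json
from statistics import mean

# One illustrative record mirroring the per-sample schema above
# (7 numerical scores, 4 specific descriptions, 1 overall comment).
sample_line = json.dumps({
    "audio_path": "path/to/audio.wav",
    "scores": {"noise": 4, "distortion": 3, "speed": 3, "continuity": 5,
               "naturalness": 3, "listening_effort": 5, "overall": 3},
    "descriptions": {
        "noise_description": "Outdoor music noise, 0-3s",
        "distortion_description": "None",
        "unnatural_pause": "None",
        "feeling_of_voice": "A young man's gentle voice with a peaceful tone",
    },
    "natural_language_description": "The speech sample presents a gentle and peaceful tone...",
})

def parse_sample(line: str) -> dict:
    """Parse one JSON-lines record and attach the mean of its numerical scores."""
    record = json.loads(line)
    record["mean_score"] = mean(record["scores"].values())
    return record

record = parse_sample(sample_line)
```

Keeping the scores as a flat mapping makes per-aspect analysis (e.g., averaging `naturalness` across a system's samples) a simple dictionary lookup.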