QualiSpeech: A Speech Quality Assessment Dataset with Natural Language Reasoning and Descriptions
- Paper: https://arxiv.org/abs/2503.20290
QualiSpeech is a comprehensive English-language speech quality assessment dataset designed to go beyond traditional numerical scores. It introduces detailed natural language comments with reasoning, capturing low-level speech perception aspects such as noise, distortion, continuity, speed, naturalness, listening effort, and overall quality.
Key Features
- 11 annotated aspects including 7 numerical scores and 4 specific descriptions (e.g., noise type and time, distortion type and time, unnatural pauses, vocal characteristics).
- Natural language descriptions capturing contextual and logical insights for overall quality reasoning.
- Over 15,000 speech samples from diverse sources including synthetic (e.g., BVCC, recent TTS models) and real speech (e.g., NISQA, GigaSpeech).
- QualiSpeech Benchmark for evaluating low-level speech perception in auditory large language models (LLMs).
Dataset Structure
Each sample in the dataset contains:

```yaml
audio_path: path/to/audio.wav
scores:
  noise: 4
  distortion: 3
  speed: 3
  continuity: 5
  naturalness: 3
  listening_effort: 5
  overall: 3
descriptions:
  noise_description: "Outdoor music noise, 0–3s"
  distortion_description: "None"
  unnatural_pause: "None"
  feeling_of_voice: "A young man's gentle voice with a peaceful tone"
natural_language_description: |
  The speech sample presents a gentle and peaceful tone...
```
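As a minimal sketch of working with one sample in this structure, the snippet below hard-codes a dict mirroring the fields shown above and averages the seven numerical aspect scores. The sample values are illustrative only; in practice you would parse samples from the merged dataset files.

```python
# Illustrative QualiSpeech-style sample, mirroring the structure above.
# The values are made up for demonstration, not taken from the dataset.
sample = {
    "audio_path": "path/to/audio.wav",
    "scores": {
        "noise": 4, "distortion": 3, "speed": 3, "continuity": 5,
        "naturalness": 3, "listening_effort": 5, "overall": 3,
    },
    "descriptions": {
        "noise_description": "Outdoor music noise, 0-3s",
        "distortion_description": "None",
        "unnatural_pause": "None",
        "feeling_of_voice": "A young man's gentle voice with a peaceful tone",
    },
    "natural_language_description": "The speech sample presents a gentle and peaceful tone...",
}

def mean_aspect_score(s: dict) -> float:
    """Average the seven numerical aspect scores of one sample."""
    scores = s["scores"]
    return sum(scores.values()) / len(scores)

print(mean_aspect_score(sample))
```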
Download Instructions
Due to licensing restrictions on the Blizzard Challenge data (which may NOT be redistributed), please first download the required BVCC data using the provided scripts:

```bash
bash download_bvcc.sh   # or download manually
bash merge_data.sh      # construct the final QualiSpeech dataset
```
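After running the scripts, a quick sanity check that the expected output files exist can save a confusing failure later. The helper below is a generic sketch; the file names passed to it are hypothetical examples, so adjust them to whatever `merge_data.sh` actually produces.

```python
from pathlib import Path

def missing_files(root: str, names: list[str]) -> list[str]:
    """Return the subset of expected file names not present under root."""
    base = Path(root)
    return [n for n in names if not (base / n).exists()]

# Hypothetical output names -- replace with the real merge_data.sh outputs.
print(missing_files(".", ["train.csv", "test.csv"]))
```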
References of resources & models used
Resources:
- BVCC: Erica Cooper and Junichi Yamagishi. 2021. How do voices from past speech synthesis challenges compare today? In Proc. SSW, Budapest.
- NISQA: Gabriel Mittag, Babak Naderi, Assmaa Chehadi, and Sebastian Möller. 2021. NISQA: A deep CNN-self-attention model for multidimensional speech quality prediction with crowdsourced datasets. In Proc. Interspeech, Brno.
- GigaSpeech: Guoguo Chen, Shuzhou Chai, Guanbo Wang, Jiayu Du, Wei-Qiang Zhang, Chao Weng, Dan Su, Daniel Povey, Jan Trmal, Junbo Zhang, et al. 2021. GigaSpeech: An evolving, multi-domain ASR corpus with 10,000 hours of transcribed audio. In Proc. Interspeech, Brno.
Acoustic models:
- ChatTTS: https://github.com/2noise/ChatTTS
- XTTS v2: https://github.com/coqui-ai/TTS
- CosyVoice: Zhihao Du, Qian Chen, Shiliang Zhang, Kai Hu, Heng Lu, Yexin Yang, Hangrui Hu, Siqi Zheng, Yue Gu, Ziyang Ma, et al. 2024. CosyVoice: A scalable multilingual zero-shot text-to-speech synthesizer based on supervised semantic tokens. arXiv preprint arXiv:2407.05407.
- F5-TTS: Yushen Chen, Zhikang Niu, Ziyang Ma, Keqi Deng, Chunhui Wang, Jian Zhao, Kai Yu, and Xie Chen. 2024. F5-TTS: A fairytaler that fakes fluent and faithful speech with flow matching. arXiv preprint arXiv:2410.06885.
- E2-TTS: Sefik Emre Eskimez, Xiaofei Wang, Manthan Thakker, Canrun Li, Chung-Hsien Tsai, Zhen Xiao, Hemin Yang, Zirun Zhu, Min Tang, Xu Tan, et al. 2024. E2 TTS: Embarrassingly easy fully non-autoregressive zero-shot TTS. In Proc. SLT, Macao. (implemented by F5-TTS)
- OpenVoice V1/V2: Zengyi Qin, Wenliang Zhao, Xumin Yu, and Xin Sun. 2023. OpenVoice: Versatile instant voice cloning. arXiv preprint arXiv:2312.01479.
- Parler-TTS Mini/Large: https://github.com/huggingface/parler-tts
- VoiceCraft-830M: Puyuan Peng, Po-Yao Huang, Shang-Wen Li, Abdelrahman Mohamed, and David Harwath. 2024. VoiceCraft: Zero-shot speech editing and text-to-speech in the wild. In Proc. ACL, Bangkok.
Noise:
Synthesized text:
Speaker for zero-shot TTS:
License:
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)
Citation
If you use QualiSpeech in your work, please cite:
```bibtex
@inproceedings{wang2025qualispeech,
  title     = {QualiSpeech: A Speech Quality Assessment Dataset with Natural Language Reasoning and Descriptions},
  author    = {Siyin Wang and Wenyi Yu and Xianzhao Chen and Xiaohai Tian and Jun Zhang and Lu Lu and Yu Tsao and Junichi Yamagishi and Yuxuan Wang and Chao Zhang},
  year      = {2025},
  booktitle = {Proc. ACL},
  address   = {Vienna}
}
```