Method behind transcription of overlapping examples
Hi
I noticed that in this iteration you indicate whether an example contains overlapping speech from multiple people, nice!
I would like to know the methodology behind which speakers are transcribed in the labels, as I notice that not all of the overlapping speech appears.
Is it just whoever speaks first is labeled, and the rest is ignored as noise?
Hi
All conversation data have been manually transcribed. The procedure for overlapping speech has been to produce two transcriptions: one focusing on speaker A's utterance and one focusing on speaker B's utterance. In the subsequent data processing, the audio has been cut up according to the start and end times of each transcribed utterance.
Thus overlapping speech will result in two segments:
- Speaker A's speech is transcribed and speaker B's speech is "noise"
- Speaker B's speech is transcribed and speaker A's speech is "noise"
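The cutting step above can be sketched roughly as follows. This is a minimal illustration, not the actual pipeline: the field names (`speaker`, `start`, `end`, `text`) and the sample rate are assumptions for the example. The point is that each segment is sliced from the shared audio by its own utterance's timestamps, so two overlapping utterances yield two segments whose sample ranges overlap, and each segment's label covers only its own speaker.

```python
def segment_conversation(audio, sample_rate, utterances):
    """Cut one conversation's audio into per-utterance segments.

    `utterances` is a list of dicts with hypothetical keys:
    'speaker', 'start', 'end' (in seconds), and 'text'.
    """
    segments = []
    for utt in utterances:
        start = int(utt["start"] * sample_rate)
        end = int(utt["end"] * sample_rate)
        segments.append({
            "speaker": utt["speaker"],
            "transcription": utt["text"],  # only this speaker is transcribed
            "audio": audio[start:end],     # may still contain the other speaker as "noise"
        })
    return segments


# Overlapping speech: A speaks 0.0-2.0 s, B speaks 1.5-3.0 s.
utterances = [
    {"speaker": "A", "start": 0.0, "end": 2.0, "text": "hello there"},
    {"speaker": "B", "start": 1.5, "end": 3.0, "text": "oh hi"},
]
audio = list(range(3 * 16000))  # stand-in for 3 s of audio at 16 kHz
segments = segment_conversation(audio, 16000, utterances)
# Two segments come out, and the 1.5-2.0 s region appears in both of them.
```

Note that the segments are independent cuts of the same audio, so the overlapping region is duplicated across them rather than assigned to a single speaker.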
We aim to make an additional dataset release, which will contain the full conversations (with redacted personal information) and the corresponding transcriptions. This will allow people to make their own segmentation strategies.
Alright, so without knowing the original conversation structure, only looking at the examples, one could say that the person talking at the very start of each example is the one transcribed, and the rest are noise?
EDIT:
or rather, whoever is speaking throughout the entire example is the one who is transcribed (since each example is a complete utterance), and the rest is noise.