Centre for Speech Technology Research (CSTR)
The Centre for Speech Technology Research (CSTR) is an interdisciplinary research centre linking Informatics with Linguistics and English Language.
Founded in 1984, CSTR conducts research in all areas of speech technology, including speech recognition, speech synthesis, speech signal processing, information access, multimodal interfaces and dialogue systems. We collaborate widely with the community of researchers in speech science, language, cognition and machine learning for which Edinburgh is renowned.
Collections in this community
SALB project
Synthesis of Fast Speech / Speech Synthesis of Auditive Lecture Books (SALB)
The Voice Conversion Challenge
Development of speaker conversion systems
UltraSuite
A repository of ultrasound and acoustic data from child speech therapy sessions
VCTK
Voice Cloning Toolkit
Recent Submissions
Listening-test materials for "Where do the improvements come from in sequence-to-sequence neural TTS?"
This data release contains listening-test materials associated with the paper "Where do the improvements come from in sequence-to-sequence neural TTS?", presented at SSW10 (the 10th ISCA Speech Synthesis Workshop) in Vienna, ...
CSTR VCTK Corpus: English Multi-speaker Corpus for CSTR Voice Cloning Toolkit (version 0.92)
This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation ...
ASVspoof 2019: The 3rd Automatic Speaker Verification Spoofing and Countermeasures Challenge database
This is a database used for the Third Automatic Speaker Verification Spoofing and Countermeasures Challenge, for short, ASVspoof 2019 (http://www.asvspoof.org) organized by Junichi Yamagishi, Massimiliano Todisco, Md ...
Listening-test materials for "Modern speech synthesis for phonetic sciences: a discussion and an evaluation"
This data release contains listening-test materials associated with the paper "Modern speech synthesis for phonetic sciences: a discussion and an evaluation", presented at ICPhS 2019 in Melbourne, Australia.
Alba speech corpus
Single speaker read speech corpus of a Scottish accented female native English speaker (Alba). The corpus was recorded in four speaking styles: plain (normal read speech, around 4 hours of recordings), fast (speaking as ...
Listening test results of the Voice Conversion Challenge 2018
This dataset is associated with a paper and a dataset below: (1) Jaime Lorenzo-Trueba, Junichi Yamagishi, Tomoki Toda, Daisuke Saito, Fernando Villavicencio, Tomi Kinnunen, Zhenhua Ling, "The Voice Conversion Challenge ...
UltraSuite Repository - sample data
UltraSuite is a repository of ultrasound and acoustic data from child speech therapy sessions. The current release includes three data collections, one from typically developing children -- Ultrax Typically Developing ...
Hurricane natural speech corpus - higher quality version
Single male native British-English talker recorded producing three speech sets (Harvard sentences, Modified Rhyme Test, news sentences) in quiet and while the talker was listening to speech-shaped noise at 84 dB(A). This ...
Parallel Audiobook Corpus
The Parallel Audiobook Corpus (version 1.0) is a collection of parallel readings of audiobooks. The corpus consists of approximately 121 hours of speech at 22.05 kHz across 4 books and 59 speakers. The data is provided in ...
Manual and automatic labels for version 1.0 of UXTD, UXSSD, and UPX core data -- version 1.0
UltraSuite is a repository of ultrasound and acoustic data from child speech therapy sessions. The current release includes three data collections, one from typically developing children (UXTD) and two from children with ...
The Voice Conversion Challenge 2018: database and results
Voice conversion (VC) is a technique for transforming the speaker identity of a source speech waveform into that of a different speaker while preserving the linguistic content of the source. In 2016, we have ...
The 2nd Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2017) Database, Version 2
This is a database used for the Second Automatic Speaker Verification Spoofing and Countermeasures Challenge, for short, ASVspoof 2017 (http://www.asvspoof.org) organized by Tomi Kinnunen, Md Sahidullah, Héctor Delgado, ...
Device Recorded VCTK (Small subset version)
This dataset is a new variant of the voice cloning toolkit (VCTK) dataset: device-recorded VCTK (DR-VCTK), where the high-quality speech signals recorded in a semi-anechoic chamber using professional audio devices are ...
SUPERSEDED - The 2nd Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2017) Database, Version 2
## This item has been replaced by the one which can be found at https://doi.org/10.7488/ds/2332 ##
Dutch English Lombard Speech Native and Non-Native (DELNN)
The DELNN (Dutch English Lombard speech Native and Non-Native) corpus consists of 30 native Dutch speakers reading sentences in a quiet environment and in a noisy environment, to elicit Lombard speech. The Dutch speakers ...
Radboud Lombard Corpus_Dutch
This data set contains Dutch sentence-reading material from 54 native Dutch speakers (12 available in the current release), with 48 sentences in the natural condition and 48 sentences in the Lombard condition per speaker.
SUPERSEDED - Device Recorded VCTK (Small subset version)
## This item has been replaced by the one which can be found at https://doi.org/10.7488/ds/2316 ## This dataset is a new variant of the voice cloning toolkit (VCTK) dataset: device-recorded VCTK (DR-VCTK), where the ...
Noisy reverberant speech database for training speech enhancement algorithms and TTS models
Noisy reverberant speech database. The database was designed to train and test speech enhancement (noise suppression and dereverberation) methods that operate at 48 kHz. Clean speech was made reverberant and noisy by ...
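The description above says clean speech was "made reverberant" by processing. A minimal sketch of the standard way this is done (convolving dry speech with a room impulse response); the toy RIR and signal here are illustrative stand-ins, not the corpus's actual processing pipeline, which is documented in its accompanying papers:

```python
import numpy as np

def reverberate(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Simulate reverberation by convolving dry speech with a room
    impulse response (RIR). Output length is len(dry) + len(rir) - 1."""
    return np.convolve(dry, rir)

rng = np.random.default_rng(0)
dry = rng.standard_normal(48_000)                          # stand-in for 1 s of 48 kHz speech
# Toy RIR: exponentially decaying noise, NOT a measured room response
rir = rng.standard_normal(2_400) * np.exp(-np.arange(2_400) / 480.0)
wet = reverberate(dry, rir)
```

The convolution smears each sample of the dry signal across the RIR's decay, which is why the reverberant output is slightly longer than the input.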
Noisy speech database for training speech enhancement algorithms and TTS models
Clean and noisy parallel speech database. The database was designed to train and test speech enhancement methods that operate at 48 kHz. A more detailed description can be found in the papers associated with the database. ...
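A "clean and noisy parallel" database like this one pairs each clean utterance with a version mixed with noise at a target signal-to-noise ratio. As a hedged illustration of the standard SNR-mixing arithmetic (the function name and random stand-in signals are my own, not the corpus's creation pipeline):

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the clean-to-noise power ratio equals `snr_db`,
    then add it to `clean` to produce the noisy parallel signal."""
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    # Gain that makes scaled noise power equal clean_power / 10**(snr_db/10)
    gain = np.sqrt(clean_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return clean + gain * noise

rng = np.random.default_rng(1)
clean = rng.standard_normal(48_000)   # stand-in for 1 s of 48 kHz clean speech
noise = rng.standard_normal(48_000)   # stand-in noise segment
noisy = mix_at_snr(clean, noise, snr_db=5.0)
```

Because the noise gain is derived directly from the two signals' measured powers, the achieved SNR matches the requested value exactly for any pair of inputs.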