Browsing Centre for Speech Technology Research (CSTR) by Date Accessioned
Now showing items 1-20 of 57
Acted clear speech corpus
(LISTA Consortium: (i) Language and Speech Laboratory, Universidad del Pais Vasco, Spain and Ikerbasque, Spain; (ii) Centre for Speech Technology Research, University of Edinburgh, UK; (iii) KTH Royal Institute of Technology, Sweden; (iv) Institute of Computer Science, FORTH, Greece, 2013-09-24) Single male native British English talker recorded producing 25 TIMIT sentences in 5 conditions, two natural: (i) quiet, (ii) while the talker listened to high-intensity speech-shaped noise, and three acted: (i) as if to ...
Sharvard
(LISTA Consortium: (i) Language and Speech Laboratory, Universidad del Pais Vasco, Spain and Ikerbasque, Spain; (ii) Centre for Speech Technology Research, University of Edinburgh, UK; (iii) KTH Royal Institute of Technology, Sweden; (iv) Institute of Computer Science, FORTH, Greece, 2013-09-24) Two native Spanish talkers (one male, one female) recorded producing 720 Spanish sentences designed to be the Spanish equivalent of the English-language Harvard sentences (thus phonetically balanced across sets of ten sentences).
DiapixFL
(LISTA Consortium: (i) Language and Speech Laboratory, Universidad del Pais Vasco, Spain and Ikerbasque, Spain; (ii) Centre for Speech Technology Research, University of Edinburgh, UK; (iii) KTH Royal Institute of Technology, Sweden; (iv) Institute of Computer Science, FORTH, Greece, 2013-10-01) DiapixFL consists of speakers whose first language (L1) is either English or Spanish solving a "spot-the-difference" task in both their L1 and their second language (L2, which for native English talkers is Spanish, and for ...
Hurricane natural speech corpus
(LISTA Consortium: (i) Language and Speech Laboratory, Universidad del Pais Vasco, Spain and Ikerbasque, Spain; (ii) Centre for Speech Technology Research, University of Edinburgh, UK; (iii) KTH Royal Institute of Technology, Sweden; (iv) Institute of Computer Science, FORTH, Greece, 2013-10-01) Single male native British English talker recorded producing three speech sets (Harvard sentences, Modified Rhyme Test, news sentences) in quiet and while the talker was listening to speech-shaped noise at 84 dB(A). A higher ...
Repeated Harvard Sentence Prompts corpus version 0.5
(University of Edinburgh, Centre for Speech Technology Research; Cambridge University Engineering Department, 2014-06-19) Studio recording of a female native British English talker producing three sets of Harvard sentences (thirty prompts), each prompt repeated forty times. Available both as unprocessed 96 kHz recordings and standardised 16 kHz files.
Sharvard_IJA
(LISTA Consortium: (i) Language and Speech Laboratory, Universidad del Pais Vasco, Spain and Ikerbasque, Spain; (ii) Centre for Speech Technology Research, University of Edinburgh, UK; (iii) KTH Royal Institute of Technology, Sweden; (iv) Institute of Computer Science, FORTH, Greece, 2014-07-28) Two native Spanish talkers (one male, one female) recorded producing 700 Spanish sentences designed to be the Spanish equivalent of the English-language Harvard sentences (thus phonetically balanced across sets of ten ...
Spoofing and Anti-Spoofing (SAS) corpus v1.0
This dataset is associated with the paper "SAS: A speaker verification spoofing database containing diverse attacks", which presents the first version of a speaker verification spoofing and anti-spoofing database, named SAS ...
Artificial Personality
This dataset is associated with the paper “Artificial Personality and Disfluency” by Mirjam Wester, Matthew Aylett, Marcus Tomalin and Rasmus Dall, published at Interspeech 2015, Dresden. The focus of this paper is ...
Listening test materials for "Deep neural network context embeddings for model selection in rich-context HMM synthesis"
These are the listening test materials for "Deep neural network context embeddings for model selection in rich-context HMM synthesis". They include the waveforms played to listeners as well as the listeners' responses.
Superseded - Human vs Machine Spoofing
This item has been replaced. Please see Wester, M., Wu, Z. and Yamagishi, J. (2015). Human vs Machine Spoofing [dataset]. University of Edinburgh. https://doi.org/10.7488/ds/258.
Human vs Machine Spoofing
Listening test materials for "Human vs Machine Spoofing Detection on Wideband and Narrowband data." They include lists of the speech material selected from the SAS spoofing database and the listeners' responses. The main ...
Listening test materials for "A study of speaker adaptation for DNN-based speech synthesis"
The dataset contains the testing stimuli and listeners' MUSHRA test responses for the Interspeech 2015 paper, "A study of speaker adaptation for DNN-based speech synthesis". In this paper, we conduct an experimental analysis ...
Experiment materials for "The temporal delay hypothesis: Natural, vocoded and synthetic speech."
Including disfluencies in synthetic speech is being explored as a way of making synthetic speech sound more natural and conversational. How to measure whether the resulting speech is actually more natural, however, is not ...
Experiment materials for "Disfluencies in change detection in natural, vocoded and synthetic speech."
This dataset is associated with the DiSS paper "Disfluencies in change detection in natural, vocoded and synthetic speech." In this paper we investigate the effect of filled pauses, a discourse marker and silent ...
Listening test materials for "Multiple Feed-forward Deep Neural Networks for Statistical Parametric Speech Synthesis"
In the paper which this data accompanies, we investigate a combination of several feed-forward deep neural networks (DNNs) for a high-quality statistical parametric speech synthesis system. Recently, DNNs have significantly ...
Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2015) Database
The database has been used in the first Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof 2015). Genuine speech is collected from 106 speakers (45 male, 61 female) and with no significant channel ...
Listening test materials for "Deep neural network-guided unit selection synthesis"
These are the listening test materials for "Deep neural network-guided unit selection synthesis". They include the waveforms played to listeners as well as the listeners' responses.
Experiment materials for "Testing the consistency assumption: pronunciation variant forced alignment in read and spontaneous speech synthesis"
The MATLAB scripts are used to analyse the result files in the results folder. The Test_Wavs folder contains the wav files used for the listening test, divided by group, along with the pre-test files.
Listening test materials for "Smooth Talking: Articulatory Join Costs for Unit Selection"
This is the listening test data for the experiment presented in the ICASSP 2016 paper "Smooth Talking: Articulatory Join Costs for Unit Selection", which proposes and evaluates computation of unit selection join costs in ...
Listening test materials for "From HMMs to DNNs: Where do the improvements come from?"
This data release contains listening test materials associated with the paper "From HMMs to DNNs: Where do the improvements come from?", presented at ICASSP 2016 in Shanghai, China.