Roland Thiolliere
Researcher at École Normale Supérieure
Publications - 6
Citations - 307
Roland Thiolliere is an academic researcher from École Normale Supérieure. The author has contributed to research in topics: Time delay neural network & Recurrent neural network. The author has an h-index of 4 and has co-authored 6 publications receiving 278 citations. Previous affiliations of Roland Thiolliere include PSL Research University.
Papers
Proceedings ArticleDOI
The Zero Resource Speech Challenge 2015
Maarten Versteegh,Roland Thiolliere,Thomas Schatz,Xuan Nga Cao,Xavier Anguera,Aren Jansen,Emmanuel Dupoux +6 more
TL;DR: The Interspeech 2015 Zero Resource Speech Challenge aims at discovering subword and word units from raw speech. The challenge provides the first unified and open-source suite of evaluation metrics and datasets for comparing and analysing the results of unsupervised linguistic unit discovery algorithms.
Proceedings ArticleDOI
A hybrid dynamic time warping-deep neural network architecture for unsupervised acoustic modeling.
TL;DR: An architecture for the unsupervised discovery of talker-invariant subword embeddings that combines a dynamic-time-warping-based spoken term discovery system with a Siamese deep neural network.
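Dynamic time warping, the first component named in this TL;DR, aligns two variable-length feature sequences by finding the minimum-cost warping path between them. The following is a minimal textbook sketch of DTW distance (not the authors' spoken term discovery system; the Euclidean frame cost is an assumption):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two feature sequences.

    a, b: arrays of shape (n_frames, n_dims), e.g. MFCC frames.
    Returns the accumulated cost of the cheapest alignment path.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Local cost: Euclidean distance between the two frames.
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A sequence aligned with a time-stretched copy of itself costs nothing.
x = np.array([[0.0], [1.0]])
y = np.array([[0.0], [0.0], [1.0]])
print(dtw_distance(x, y))
```

In spoken term discovery, pairs of segments with low DTW cost are treated as likely repetitions of the same word, which in the paper's setting supplies the weak supervision for training the Siamese network.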
Journal ArticleDOI
WordSeg: Standardizing unsupervised word form segmentation from text
Mathieu Bernard, Roland Thiolliere, Amanda Saksida, Georgia R. Loukatou, Elin Larsen, Mark Johnson, Laia Fibla, Emmanuel Dupoux, Robert Daland, Xuan Nga Cao, Alejandrina Cristia +15 more
TL;DR: The authors created an open-source tool that enables reproducible results and encourages cumulative science in this domain, and that serves as a platform to which other researchers can add their own segmentation algorithms.
Posted Content
Improving Phoneme segmentation with Recurrent Neural Networks.
TL;DR: This work proposes a novel unsupervised algorithm based on sequence prediction models such as Markov chains and recurrent neural networks that learns the dynamics of speech in the MFCC space and hypothesizes boundaries from local maxima in the prediction error.
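The boundary-hypothesis step described in this TL;DR can be illustrated with a toy sketch: predict each frame from its predecessor and place boundaries at local maxima of the prediction error. This is a hedged illustration only; the trivial copy-last-frame predictor below stands in for the paper's learned Markov chain or recurrent network:

```python
import numpy as np

def hypothesize_boundaries(features):
    """Place boundaries at local maxima of frame-prediction error.

    features: array of shape (n_frames, n_dims) of, e.g., MFCC frames.
    The toy predictor forecasts each frame as a copy of the previous
    one, so the prediction error is the frame-to-frame distance.
    """
    errors = np.linalg.norm(np.diff(features, axis=0), axis=1)
    # A local maximum: error strictly higher than both neighbours.
    return [t + 1 for t in range(1, len(errors) - 1)
            if errors[t] > errors[t - 1] and errors[t] > errors[t + 1]]

# Two steady segments with an abrupt change between them:
# the error spikes exactly at the segment boundary.
feats = np.concatenate([np.zeros((5, 3)), np.ones((5, 3))])
print(hypothesize_boundaries(feats))  # → [5]
```

With a trained sequence model in place of the copy predictor, the error peaks at points where the speech dynamics become hard to predict, which tend to coincide with phoneme transitions.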
Proceedings ArticleDOI
The “language filter” hypothesis: A feasibility study of language separation in infancy using unsupervised clustering of I-vectors
TL;DR: This work uses speech technology tools in combination with unsupervised clustering to test language separation on speech from several speakers of two languages, and investigates the outcome of the clustering as a function of the variability of language experience and the availability of side information.