Tobias Bocklet
Researcher at Intel
Publications - 98
Citations - 1506
Tobias Bocklet is an academic researcher from Intel. His research focuses on speaker recognition and computer science. He has an h-index of 19 and has co-authored 86 publications receiving 1,286 citations. Previous affiliations of Tobias Bocklet include University of Erlangen-Nuremberg & SRI International.
Papers
Proceedings ArticleDOI
Improvement of a speech recognizer for standardized medical assessment of children's speech by integration of prior knowledge
TL;DR: This work improves an automatic speech recognition system for the standardized medical assessment of children's speech by integrating prior knowledge, taking a first step towards speech assessment at the word and subword level.
Patent
Systems and methods for energy efficient and low power distributed automatic speech recognition on wearable devices
Binuraj K. Ravindran,Francis M. Tharappel,Prabhakar R. Datta,Tobias Bocklet,Maciej Muchlinski,Dorau Tomasz,Josef Bauer,Saurin Shah,Georg Stemmer +8 more
TL;DR: In this patent, a quantizer processes audio features on the wearable device, producing a reduced set of audio features that can be transmitted over a low-energy communication channel for remote processing.
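The patent summary above does not specify the quantization scheme, so the following is only a minimal sketch of the general idea: uniformly quantizing floating-point audio features (e.g. MFCC frames) to 8-bit codes so they occupy a quarter of the bandwidth of float32 values. All function names and parameters here are illustrative assumptions, not the patented method.

```python
import numpy as np

def quantize_features(features, num_bits=8):
    """Uniformly quantize float features to num_bits-wide integer codes
    so they fit a low-bandwidth channel (illustrative sketch only)."""
    lo, hi = float(features.min()), float(features.max())
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((features - lo) / scale).astype(np.uint8)
    return codes, lo, scale

def dequantize_features(codes, lo, scale):
    """Reconstruct approximate features on the receiving side."""
    return codes.astype(np.float32) * scale + lo

# Toy stand-in for extracted audio features (100 frames, 13 coefficients)
feats = np.random.randn(100, 13).astype(np.float32)
codes, lo, scale = quantize_features(feats)
recon = dequantize_features(codes, lo, scale)
```

With 8-bit codes the payload shrinks 4x versus float32, and the reconstruction error per value is bounded by half a quantization step.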
Book ChapterDOI
Text-independent speaker identification using temporal patterns
TL;DR: This work presents an approach to text-independent speaker recognition that uses a Universal Background Model trained with the EM algorithm on all available training data, together with Gaussian Mixture Models with varying numbers of densities.
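The GMM-UBM scheme in the summary above can be sketched as follows. This is a toy illustration, not the paper's actual pipeline: it fits per-speaker GMMs directly on synthetic "feature frames" (real systems typically MAP-adapt speaker models from the UBM and use acoustic features such as MFCCs), then identifies a speaker by the log-likelihood ratio against the UBM.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy feature frames for two speakers (stand-ins for real acoustic features)
spk_a = rng.normal(0.0, 1.0, size=(500, 4))
spk_b = rng.normal(3.0, 1.0, size=(500, 4))

# Universal Background Model: one GMM fit on all pooled training data
ubm = GaussianMixture(n_components=4, random_state=0).fit(np.vstack([spk_a, spk_b]))

# Per-speaker GMMs (trained directly here; MAP adaptation from the UBM is common)
gmm_a = GaussianMixture(n_components=4, random_state=0).fit(spk_a)
gmm_b = GaussianMixture(n_components=4, random_state=0).fit(spk_b)

def identify(utterance):
    # Score each speaker model relative to the UBM (log-likelihood ratio)
    scores = {name: g.score(utterance) - ubm.score(utterance)
              for name, g in [("A", gmm_a), ("B", gmm_b)]}
    return max(scores, key=scores.get)

test_frames = rng.normal(3.0, 1.0, size=(200, 4))  # frames drawn like speaker B
print(identify(test_frames))
```

The UBM acts as a shared "average speaker" model, so scoring against it normalizes away variability common to all speakers.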
Journal ArticleDOI
Sprachverständlichkeit und Krankheitsverarbeitung nach der Therapie von Mundhöhlenkarzinomen (Speech intelligibility and coping after therapy of oral cavity carcinomas)
M. Sambale,Maria Schuster,Tobias Bocklet,Andreas Maier,Ulrich Eysholdt,A. Ströbele,Florian Stelzle +6 more
TL;DR: Reduced speech intelligibility after multimodal treatment of oral cancer is associated with a change of coping strategy, involving not only communication-based strategies but also intra-psychic processes such as rumination.
Proceedings ArticleDOI
Ultra-Compact NLU: Neuronal Network Binarization as Regularization.
TL;DR: This paper describes a technique for training neural networks whose final weights are binary, enabling memory-bandwidth-optimized inference and efficient computation even on constrained/embedded platforms.
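The core idea of binary weights can be illustrated with a minimal sketch. This is not the paper's training procedure (which learns binary weights during training, with binarization acting as a regularizer); it merely shows why binarization helps at inference time: keeping only the sign of each weight plus one per-layer scale lets weights be packed at 1 bit each, a 32x memory reduction versus float32.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(256, 128)).astype(np.float32)  # full-precision layer weights

# Binarize: keep only the sign, plus one per-layer scale (mean absolute weight)
alpha = float(np.abs(W).mean())
W_bin = np.sign(W).astype(np.int8)   # values in {-1, +1}

x = rng.normal(size=(1, 256)).astype(np.float32)
y_full = x @ W                        # full-precision forward pass
y_bin = alpha * (x @ W_bin)           # approximate forward pass, binary weights

# Packing one bit per weight cuts storage 32x versus float32
packed = np.packbits(W_bin > 0)
```

On hardware, the matrix product with {-1, +1} weights reduces to additions and subtractions (or XNOR/popcount on packed bits), which is what makes inference cheap on embedded platforms.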