
Martin Sundermeyer

Researcher at RWTH Aachen University

Publications: 24
Citations: 3205

Martin Sundermeyer is an academic researcher from RWTH Aachen University. He has contributed to research on topics including language modeling and recurrent neural networks. He has an h-index of 14 and has co-authored 19 publications receiving 2644 citations.

Papers
Proceedings ArticleDOI

LSTM Neural Networks for Language Modeling.

TL;DR: This work analyzes the Long Short-Term Memory neural network architecture on an English and a large French language modeling task and gains considerable improvements in WER on top of a state-of-the-art speech recognition system.
Journal ArticleDOI

From feedforward to recurrent LSTM neural networks for language modeling

TL;DR: This paper compares count models to feedforward, recurrent, and long short-term memory (LSTM) neural network variants on two large-vocabulary speech recognition tasks, and analyzes the potential improvements that can be obtained when applying advanced algorithms to the rescoring of word lattices on large-scale setups.
Proceedings ArticleDOI

Translation Modeling with Bidirectional Recurrent Neural Networks

TL;DR: This work presents phrase-based translation models that are more consistent with phrase-based decoding and introduces bidirectional recurrent neural models to the problem of machine translation, allowing the full source sentence to be used in the models.
Proceedings ArticleDOI

Comparison of feedforward and recurrent neural network language models

TL;DR: A simple and efficient method to normalize language model probabilities across different vocabularies is proposed, and it is shown how to speed up training of recurrent neural networks by parallelization.

RASR - The RWTH Aachen University Open Source Speech Recognition Toolkit

Abstract: RASR is the open-source version of the well-proven speech recognition toolkit developed and used at RWTH Aachen University. The current version of the package includes state-of-the-art speech recognition technology for acoustic model training and decoding. Notable components include speaker adaptation, speaker adaptive training, unsupervised training, discriminative training, lattice processing tools, flexible signal analysis, a finite state automata library, and an efficient dynamic network decoder. Comprehensive documentation, example setups for training and recognition, and tutorials are provided to support newcomers.