Topic

Word error rate

About: Word error rate is a research topic. Over the lifetime, 11,939 publications have been published within this topic, receiving 298,031 citations.
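
For reference, word error rate (WER) is the word-level edit (Levenshtein) distance between the recognizer output and the reference transcript, normalized by the reference length: WER = (S + D + I) / N, where S, D, and I count substituted, deleted, and inserted words. A minimal Python sketch (function and variable names are illustrative):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / len(reference),
    computed as dynamic-programming edit distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat",
                      "the cat sat mat"))  # 2 deletions / 6 words ≈ 0.33
```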


Papers
PatentDOI
Jebu Jacob Rajan
TL;DR: In this article, a system is described that allows a user to add word models to a speech recognition system: the user inputs a number of renditions of a new word, and the system generates from these a sequence of phonemes representative of the new word.
Abstract: A system is provided for allowing a user to add word models to a speech recognition system. In particular, the system allows a user to input a number of renditions of the new word and generates from these a sequence of phonemes representative of the new word. This representative sequence of phonemes is stored in a word-to-phoneme dictionary together with the typed version of the word for subsequent use by the speech recognition system.

166 citations
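
To make the patent's dictionary-update step concrete, here is a loose Python sketch; the decoder stub and the most-frequent-decoding consensus heuristic are assumptions for illustration, not the patent's actual method:

```python
from collections import Counter

def add_word(lexicon, typed_word, renditions, decoder):
    """Derive one representative phoneme sequence from several spoken
    renditions of a new word and store it under the word's typed form."""
    decodings = [tuple(decoder(r)) for r in renditions]
    # Consensus heuristic (an assumption): keep the most frequent decoding.
    representative = Counter(decodings).most_common(1)[0][0]
    lexicon[typed_word] = representative
    return representative

lexicon = {}  # typed word -> phoneme sequence, for later use by the recognizer
# Fake decoder standing in for acoustic phoneme recognition:
fake_results = iter([("W", "ER", "D"), ("W", "ER", "D"), ("W", "EH", "D")])
add_word(lexicon, "word", renditions=[None, None, None],
         decoder=lambda audio: next(fake_results))
print(lexicon)  # {'word': ('W', 'ER', 'D')}
```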

Proceedings ArticleDOI
Samy Bengio, Georg Heigold
14 Sep 2014
TL;DR: This work presents an alternative construction, where words are projected into a continuous embedding space in which words that sound alike are nearby in the Euclidean sense, and shows how such embeddings can still be used to score words that were not in the training dictionary.
Abstract: Speech recognition systems have used the concept of states as a way to decompose words into sub-word units for decades. As the number of such states now reaches the number of words used to train acoustic models, it is interesting to consider approaches that relax the assumption that words are made of states. We present here an alternative construction, where words are projected into a continuous embedding space in which words that sound alike are nearby in the Euclidean sense. We show how embeddings can still be used to score words that were not in the training dictionary. Initial experiments using a lattice rescoring approach and model combination on a large realistic dataset show improvements in word error rate.

166 citations
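
A rough sketch of the scoring idea: if both acoustic segments and word spellings map into a shared embedding space, an out-of-vocabulary word can still be scored by embedding its letter sequence and measuring proximity. The embedder below is a stub standing in for the paper's trained networks, so its geometry is meaningless; only the scoring mechanics are illustrated:

```python
import hashlib
import numpy as np

def embed_letters(word, dim=8):
    """Stub embedder: a deterministic random unit vector per spelling so the
    demo runs; a real system trains a network so that similar-sounding
    words land close together."""
    seed = int(hashlib.md5(word.encode()).hexdigest()[:8], 16)
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

def score(acoustic_embedding, candidate_word):
    """Cosine similarity between an acoustic-side embedding and the embedding
    of a candidate word's letter sequence, in-vocabulary or not."""
    return float(acoustic_embedding @ embed_letters(candidate_word))

# Rescoring lattice candidates, including one absent from the training lexicon:
acoustic = embed_letters("recognize")   # pretend this came from the audio side
for cand in ("recognize", "recognise", "wreck a nice beach"):
    print(cand, round(score(acoustic, cand), 3))
```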

Proceedings ArticleDOI
M. Wenk, Martin Zellweger, Andreas Burg, Norbert Felber, Wolfgang Fichtner
21 May 2006
TL;DR: In this paper, a parallel implementation of the K-best algorithm for MIMO systems is presented, which achieves up to 424 Mbps throughput with an area that is almost on par with current state-of-the-art implementations.
Abstract: From an error rate performance perspective, maximum likelihood (ML) detection is the preferred detection method for multiple-input multiple-output (MIMO) communication systems. However, for high transmission rates a straightforward exhaustive-search implementation suffers from prohibitive complexity. The K-best algorithm provides close-to-ML bit error rate (BER) performance, while its circuit complexity is reduced compared to an exhaustive search. In this paper, a new VLSI architecture for the implementation of the K-best algorithm is presented. Instead of the mostly sequential processing that has been applied in previous VLSI implementations of the algorithm, the presented solution takes a more parallel approach. Furthermore, the application of a simplified norm is discussed. The implementation in an ASIC achieves up to 424 Mbps throughput with an area that is almost on par with current state-of-the-art implementations.

166 citations
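
Although the paper's contribution is a parallel VLSI architecture, the K-best search itself is easy to sketch in software: expand the detection tree one antenna at a time and keep only the K paths with the smallest partial Euclidean distance. A minimal Python version for a real-valued toy channel (names and the 2x2 example are illustrative):

```python
import numpy as np

def k_best_detect(H, y, constellation, K=4):
    """K-best tree search for y ≈ H x with the entries of x drawn from
    `constellation`; keeps the K lowest partial-distance paths per level
    instead of enumerating all |constellation|**n candidates."""
    Q, R = np.linalg.qr(H)               # H = QR, R upper triangular
    z = Q.T @ y
    n = H.shape[1]
    survivors = [(0.0, {})]              # (partial distance, {index: symbol})
    for i in range(n - 1, -1, -1):       # detect the last entry of x first
        expanded = []
        for dist, path in survivors:
            for s in constellation:
                residual = z[i] - R[i, i] * s - sum(
                    R[i, j] * path[j] for j in range(i + 1, n))
                expanded.append((dist + residual**2, {**path, i: s}))
        survivors = sorted(expanded, key=lambda t: t[0])[:K]  # keep K best
    best_dist, best_path = survivors[0]
    return np.array([best_path[i] for i in range(n)]), best_dist

# 2x2 real-valued toy channel with a BPSK-like alphabet:
H = np.array([[1.0, 0.4], [0.3, 1.0]])
x_true = np.array([1.0, -1.0])
y = H @ x_true + 0.05 * np.random.default_rng(0).standard_normal(2)
print(k_best_detect(H, y, constellation=(-1.0, 1.0), K=2))
```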

Journal ArticleDOI
TL;DR: A new minimum recognition error formulation and a generalized probabilistic descent (GPD) algorithm are analyzed and used to accomplish discriminative training of a conventional dynamic-programming-based speech recognizer.
Abstract: A new minimum recognition error formulation and a generalized probabilistic descent (GPD) algorithm are analyzed and used to accomplish discriminative training of a conventional dynamic-programming-based speech recognizer. The objective of discriminative training here is to directly minimize the recognition error rate. To achieve this, a formulation that allows controlled approximation of the exact error rate and renders optimization possible is used. The GPD method is implemented in a dynamic-time-warping (DTW)-based system. A linear discriminant function on the DTW distortion sequence is used to replace the conventional average DTW path distance. A series of speaker-independent recognition experiments using the highly confusable English E-set as the vocabulary showed a recognition rate of 84.4% compared to approximately 60% for traditional template training via clustering. The experimental results verified that the algorithm converges to a solution that achieves minimum error rate.

165 citations
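
The crux of the minimum-classification-error formulation is replacing the non-differentiable 0/1 error count with a smooth surrogate: a misclassification measure (correct-class score against a soft maximum over competitors) passed through a sigmoid, then minimized by gradient descent. A schematic NumPy sketch; the linear discriminants and the numerical gradient below are simplifications for illustration, not the paper's DTW-based derivation:

```python
import numpy as np

def mce_loss(scores, label, eta=2.0, alpha=1.0):
    """Smoothed error count for one token.
    scores: discriminant g_k(x) per class; higher means a better match."""
    g_correct = scores[label]
    competitors = np.delete(scores, label)
    # Soft maximum over competing classes (eta -> inf approaches the hard max):
    g_comp = np.log(np.mean(np.exp(eta * competitors))) / eta
    d = -g_correct + g_comp                     # > 0 roughly means an error
    return 1.0 / (1.0 + np.exp(-alpha * d))     # sigmoid: smooth 0/1 loss

def gpd_step(W, x, label, lr=0.1, eps=1e-5):
    """One descent step on a linear discriminant W (rows = class templates),
    using a numerical gradient for brevity; the paper derives it in closed form."""
    grad = np.zeros_like(W)
    base = mce_loss(W @ x, label)
    for idx in np.ndindex(W.shape):
        Wp = W.copy()
        Wp[idx] += eps
        grad[idx] = (mce_loss(Wp @ x, label) - base) / eps
    return W - lr * grad

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
print(mce_loss(W @ x, 0), mce_loss(gpd_step(W, x, 0) @ x, 0))  # loss before/after
```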

DOI
01 Jan 2008
TL;DR: This research attempts to find a measure that, like perplexity, is easily calculated but better predicts speech recognition performance, and finds that perplexity correlates with word-error rate remarkably well when only n-gram models trained on in-domain data are considered.
Abstract: The most widely-used evaluation metric for language models for speech recognition is the perplexity of test data. While perplexities can be calculated efficiently and without access to a speech recognizer, they often do not correlate well with speech recognition word-error rates. In this research, we attempt to find a measure that, like perplexity, is easily calculated but which better predicts speech recognition performance. We investigate two approaches: first, we attempt to extend perplexity by using similar measures that utilize information about language models that perplexity ignores. Second, we attempt to imitate the word-error calculation without using a speech recognizer by artificially generating speech recognition lattices. To test our new metrics, we have built over thirty varied language models. We find that perplexity correlates with word-error rate remarkably well when only considering n-gram models trained on in-domain data. When considering other types of models, our novel metrics are superior to perplexity for predicting speech recognition performance. However, we conclude that none of these measures predict word-error rate sufficiently accurately to be effective tools for language model evaluation in speech recognition.

165 citations
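
Perplexity, the baseline metric this study compares against, is the exponentiated average negative log-probability the language model assigns to held-out words: PP = exp(-(1/N) Σ log p(w_i | history)). A small self-contained example with an add-one-smoothed bigram model (the smoothing choice is illustrative; the paper's thirty-plus models vary widely):

```python
import math
from collections import Counter

train = "the cat sat on the mat the dog sat on the rug".split()
vocab = set(train)
unigrams = Counter(train)
bigrams = Counter(zip(train, train[1:]))

def p_bigram(w, prev, V=len(vocab)):
    """Add-one (Laplace) smoothed bigram probability p(w | prev)."""
    return (bigrams[(prev, w)] + 1) / (unigrams[prev] + V)

def perplexity(test):
    log_prob = sum(math.log(p_bigram(w, prev))
                   for prev, w in zip(test, test[1:]))
    return math.exp(-log_prob / (len(test) - 1))  # average per predicted word

print(perplexity("the cat sat on the rug".split()))
```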


Network Information
Related Topics (5)
Deep learning: 79.8K papers, 2.1M citations, 88% related
Feature extraction: 111.8K papers, 2.1M citations, 86% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Artificial neural network: 207K papers, 4.5M citations, 84% related
Cluster analysis: 146.5K papers, 2.9M citations, 83% related
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    271
2022    562
2021    640
2020    643
2019    633
2018    528