Topic

Word error rate

About: Word error rate is a research topic. Over the lifetime, 11939 publications have been published within this topic receiving 298031 citations.
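Word error rate is defined as the word-level Levenshtein (edit) distance between a reference transcript and a hypothesis, normalized by the reference length: WER = (S + D + I) / N, where S, D, and I count substitutions, deletions, and insertions, and N is the number of reference words. A minimal sketch of the computation (function name and example strings are illustrative):

```python
# Word error rate via word-level Levenshtein (edit) distance:
# WER = (substitutions + deletions + insertions) / reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2/6 = 0.333...
```

Note that WER can exceed 100% when the hypothesis contains many insertions, since the numerator is not bounded by the reference length.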


Papers
Proceedings ArticleDOI
15 Oct 1996
TL;DR: It is shown that a Markov approximation for the block error process is a very good model over a broad range of parameters, an observation that leads to a unified approach to channel modelling and simplifies the performance analysis of upper-layer protocols.
Abstract: We investigate the behavior of block errors which arise in data transmission on fading channels. Our approach is more detailed than previous studies, in that it takes into account the specific coding/modulation scheme and it tracks the fading process symbol by symbol. It is shown that a Markov approximation for the block error process (possibly degenerating into an i.i.d. process for sufficiently fast fading) is a very good model for a broad range of parameters. Also, it is observed that the relationship between the marginal error rate and the transition probability is largely insensitive to parameters such as block length, degree of forward error correction and modulation format, and only depends on an appropriately normalized version of the Doppler frequency. This observation leads to a unified approach for the channel modelling which simplifies the performance analysis of upper-layer protocols.

63 citations
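The Markov approximation described in this abstract can be made concrete with a two-state chain: the next block's error state depends only on the current one, and the chain is fully specified by the marginal block error rate and the probability that an error follows an error. A minimal simulation sketch, with illustrative parameter values not taken from the paper:

```python
# Hedged sketch of a two-state Markov block-error model. State 1 = block in
# error, state 0 = block correct. eps is the marginal block error rate and
# p = P(error | previous block in error); both values are illustrative.
import random

def simulate_block_errors(n_blocks, eps=0.1, p=0.5, seed=0):
    # Stationarity fixes the other transition probability
    # q = P(error | previous block correct): eps = q / (q + 1 - p).
    q = eps * (1 - p) / (1 - eps)
    rng = random.Random(seed)
    state, errors = 0, []
    for _ in range(n_blocks):
        state = int(rng.random() < (p if state else q))
        errors.append(state)
    return errors

errs = simulate_block_errors(100_000)
print(sum(errs) / len(errs))  # close to eps = 0.1
```

When the conditional probability p equals the marginal rate eps, q equals eps as well and the chain degenerates into the i.i.d. process the abstract mentions for sufficiently fast fading.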

Journal ArticleDOI
TL;DR: In this paper, the average symbol error rate for subcarrier intensity modulated wireless optical communication systems employing general-order rectangular quadrature amplitude modulation is studied for three turbulence channel models: the Gamma-Gamma channel, the K-distributed channel, and the negative exponential channel, each with different levels of turbulence.
Abstract: The average symbol error rate is studied for subcarrier intensity modulated wireless optical communication systems employing general order rectangular quadrature amplitude modulation. We consider three different turbulence channel models, i.e., the Gamma-Gamma channel, the K-distributed channel, and the negative exponential channel with different levels of turbulence. Closed-form error rate expressions are derived using a series expansion of the modified Bessel function. In addition, detailed truncation error analysis and asymptotic error rate analysis are also presented. Numerical results demonstrate that our series solutions are highly accurate and efficient.

63 citations
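As a sanity check on closed-form results of this kind, the average symbol error rate can also be estimated by Monte Carlo simulation: draw Gamma-Gamma irradiance samples as the product of two independent unit-mean Gamma variates and average the conditional QAM error probability. The sketch below uses square 16-QAM rather than the paper's general-order rectangular constellations, and assumes the instantaneous electrical SNR scales with the square of the irradiance (usual for subcarrier intensity modulation); the alpha and beta values are illustrative:

```python
# Hedged Monte Carlo estimate of average SER for square M-QAM over a
# Gamma-Gamma turbulence channel; a numerical sketch, not the paper's
# closed-form series solution.
import numpy as np
from scipy.special import erfc

def qam_ser(snr):
    # Exact SER of square M-QAM (here M = 16) at instantaneous SNR `snr`,
    # using Q(x) = 0.5 * erfc(x / sqrt(2)).
    M = 16
    q = 0.5 * erfc(np.sqrt(3 * snr / (M - 1)) / np.sqrt(2))
    p = 2 * (1 - 1 / np.sqrt(M)) * q
    return 1 - (1 - p) ** 2

def avg_ser_gamma_gamma(mean_snr_db, alpha=4.0, beta=2.0, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    # Gamma-Gamma irradiance: product of two unit-mean Gamma variates.
    I = rng.gamma(alpha, 1 / alpha, n) * rng.gamma(beta, 1 / beta, n)
    snr = 10 ** (mean_snr_db / 10) * I ** 2
    return qam_ser(snr).mean()

print(avg_ser_gamma_gamma(25.0))
```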

Posted Content
TL;DR: This article uses shallow fusion with an external language model at inference time to improve the performance of a competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring.
Abstract: Attention-based sequence-to-sequence models for automatic speech recognition jointly train an acoustic model, language model, and alignment mechanism. Thus, the language model component is only trained on transcribed audio-text pairs. This leads to the use of shallow fusion with an external language model at inference time. Shallow fusion refers to log-linear interpolation with a separately trained language model at each step of the beam search. In this work, we investigate the behavior of shallow fusion across a range of conditions: different types of language models, different decoding units, and different tasks. On Google Voice Search, we demonstrate that the use of shallow fusion with a neural LM with wordpieces yields a 9.1% relative word error rate reduction (WERR) over our competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring.

63 citations
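Shallow fusion itself amounts to a small amount of code: at every step of the beam search, the sequence-to-sequence model's log-probability for each candidate token is combined log-linearly with an externally trained language model's log-probability. A minimal sketch of one expansion step; the scoring callables and the lm_weight value are illustrative placeholders, not the paper's settings:

```python
# One beam-search expansion step with shallow fusion. `beam` holds
# (prefix, cumulative_score) pairs; asr_log_probs and lm_log_probs map a
# prefix to a {token: log_prob} dict over a shared vocabulary.
def shallow_fusion_step(beam, asr_log_probs, lm_log_probs,
                        lm_weight=0.3, beam_size=8):
    candidates = []
    for prefix, score in beam:
        asr = asr_log_probs(prefix)  # log P_asr(token | prefix, audio)
        lm = lm_log_probs(prefix)    # log P_lm(token | prefix)
        for token, log_p in asr.items():
            # Log-linear interpolation: the fusion weight scales the LM term.
            fused = log_p + lm_weight * lm[token]
            candidates.append((prefix + [token], score + fused))
    # Prune to the top-scoring hypotheses.
    return sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
```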

Journal ArticleDOI
TL;DR: The authors have found that using more relaxed decoding constraints in preparing N-best hypotheses yields better recognition results, and a new frame-level loss function is minimized to improve the separation between the correct and incorrect hypotheses.
Abstract: The authors propose an N-best candidates-based discriminative training procedure for constructing high-performance HMM speech recognizers. The algorithm has two distinct features: N-best hypotheses are used for training discriminative models, and a new frame-level loss function is minimized to improve the separation between the correct and incorrect hypotheses. The N-best candidates are decoded with the authors' recently proposed tree-trellis fast search algorithm. The new frame-level loss function, defined as a half-wave rectified log-likelihood difference between the correct and competing hypotheses, is minimized over all training tokens. The minimization is carried out by adjusting the HMM parameters along a gradient descent direction. Two speech recognition applications have been tested: a speaker-independent, small-vocabulary (ten Mandarin Chinese digits) continuous speech recognition task, and a speaker-trained, large-vocabulary (5000 commonly used Chinese words) isolated word recognition task. Significant performance improvement over traditional maximum-likelihood-trained HMMs has been obtained. In the connected Chinese digit recognition experiment, the string error rate is reduced from 17.0% to 10.8% for unknown-length decoding and from 8.2% to 5.2% for known-length decoding. In the large-vocabulary isolated word recognition experiment, the recognition error rate is reduced from 7.2% to 3.8%. Additionally, the authors found that using more relaxed decoding constraints in preparing the N-best hypotheses yields better recognition results.

63 citations
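The frame-level loss in this paper is simple to state: at each frame, take the log-likelihood difference between the competing and correct hypotheses and keep only the positive part (half-wave rectification), so that only frames where a competitor beats the correct path contribute. A sketch with illustrative per-frame log-likelihood arrays standing in for full HMM scoring:

```python
# Hedged sketch of the frame-level loss described above: a half-wave
# rectified log-likelihood difference between the correct and competing
# hypotheses, accumulated over frames. Values are illustrative.
import numpy as np

def frame_level_loss(correct_ll, competing_ll):
    """correct_ll, competing_ll: per-frame log-likelihoods, shape [T].
    Frames where the competitor beats the correct path contribute
    (competing - correct); frames already separated contribute zero."""
    diff = competing_ll - correct_ll
    return np.maximum(diff, 0.0).sum()  # half-wave rectification

correct = np.array([-2.1, -1.8, -3.0, -2.5])
competing = np.array([-2.4, -1.5, -2.6, -2.9])
print(frame_level_loss(correct, competing))  # 0.3 + 0.4 = 0.7
```

Minimizing this quantity over all training tokens by gradient descent pushes the HMM parameters to enlarge the margin exactly on the offending frames.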

Journal ArticleDOI
TL;DR: A method is proposed for upgrading initially simple pronunciation models to models that can explain several pronunciation variants of each word; introducing such variants in a segment-based recognizer significantly improves recognition accuracy.

63 citations
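The idea of explaining several pronunciation variants per word can be pictured as a lexicon whose entries map each word to multiple phone sequences, with the recognizer free to match whichever variant scores best. The entries and scorer below are illustrative placeholders, not the paper's models:

```python
# Hedged sketch of a multi-variant pronunciation lexicon.
lexicon = {
    "data": [["d", "ey", "t", "ah"], ["d", "ae", "t", "ah"]],
    "either": [["iy", "dh", "er"], ["ay", "dh", "er"]],
}

def best_variant(word, acoustic_score):
    """acoustic_score: callable mapping a phone sequence to a log score."""
    return max(lexicon[word], key=acoustic_score)
```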


Network Information
Related Topics (5)
Deep learning: 79.8K papers, 2.1M citations (88% related)
Feature extraction: 111.8K papers, 2.1M citations (86% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Artificial neural network: 207K papers, 4.5M citations (84% related)
Cluster analysis: 146.5K papers, 2.9M citations (83% related)
Performance
Metrics

No. of papers in the topic in previous years:

Year    Papers
2023    271
2022    562
2021    640
2020    643
2019    633
2018    528