
Showing papers on "Word error rate published in 2018"


Proceedings ArticleDOI
18 Jun 2018
TL;DR: The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN, and can improve the feature representation of relatively shallow networks for image classification on the CIFAR-10 dataset.
Abstract: Recent work has made significant progress in improving spatial resolution for pixelwise labeling with the Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results: 51.7% mIoU on PASCAL-Context and 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on the ADE20K test set, which surpasses the winning entry of the COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for image classification on the CIFAR-10 dataset. Our 14-layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system is publicly available.

1,235 citations


Proceedings ArticleDOI
15 Apr 2018
TL;DR: In this article, the authors explore a variety of structural and optimization improvements to the Listen, Attend, and Spell (LAS) encoder-decoder architecture, which significantly improves performance.
Abstract: Attention-based encoder-decoder architectures such as Listen, Attend, and Spell (LAS), subsume the acoustic, pronunciation and language model components of a traditional automatic speech recognition (ASR) system into a single neural network. In previous work, we have shown that such architectures are comparable to state-of-the-art ASR systems on dictation tasks, but it was not clear if such architectures would be practical for more challenging tasks such as voice search. In this work, we explore a variety of structural and optimization improvements to our LAS model which significantly improve performance. On the structural side, we show that word piece models can be used instead of graphemes. We also introduce a multi-head attention architecture, which offers improvements over the commonly-used single-head attention. On the optimization side, we explore synchronous training, scheduled sampling, label smoothing, and minimum word error rate optimization, which are all shown to improve accuracy. We present results with a unidirectional LSTM encoder for streaming recognition. On a 12,500 hour voice search task, we find that the proposed changes improve the WER from 9.2% to 5.6%, while the best conventional system achieves 6.7%; on a dictation task our model achieves a WER of 4.1% compared to 5% for the conventional system.

885 citations
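Word error rate, the metric reported throughout the papers in this listing, is the word-level edit distance between a hypothesis and its reference transcript, normalized by the reference length. A minimal illustrative computation (not code from any of the papers above):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat on the mat", "the cat sat mat"))  # 2 deletions / 6 words ≈ 0.33
```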


Proceedings ArticleDOI
15 Apr 2018
TL;DR: The Speech-Transformer is presented, a recurrence-free sequence-to-sequence model that relies entirely on attention mechanisms to learn positional dependencies and can be trained faster and more efficiently, along with a 2D-Attention mechanism that jointly attends to the time and frequency axes of the 2-dimensional speech inputs, providing more expressive representations for the Speech-Transformer.
Abstract: Recurrent sequence-to-sequence models using the encoder-decoder architecture have made great progress in the speech recognition task. However, they suffer from the drawback of slow training speed because the internal recurrence limits training parallelization. In this paper, we present the Speech-Transformer, a recurrence-free sequence-to-sequence model that relies entirely on attention mechanisms to learn the positional dependencies and can therefore be trained faster and more efficiently. We also propose a 2D-Attention mechanism, which can jointly attend to the time and frequency axes of the 2-dimensional speech inputs, thus providing more expressive representations for the Speech-Transformer. Evaluated on the Wall Street Journal (WSJ) speech recognition dataset, our best model achieves a competitive word error rate (WER) of 10.9%, while the whole training process only takes 1.2 days on 1 GPU, significantly faster than the published results of recurrent sequence-to-sequence models.

771 citations
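The 2D-Attention idea above can be pictured as running ordinary scaled dot-product attention once along the time axis and once along the frequency axis of the spectrogram, then combining the two views. The sketch below is only a rough numpy illustration of that idea; the paper's actual module uses convolutional projections for queries, keys, and values.

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Standard scaled dot-product attention: softmax(QK^T / sqrt(d)) V."""
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# A (batch=1, time=50, freq=40) log-mel-spectrogram-like input.
x = np.random.randn(1, 50, 40)

# Attend along the time axis (time steps are the sequence)...
time_att = scaled_dot_attention(x, x, x)
# ...and along the frequency axis (transpose so frequency bins are the sequence).
xf = x.transpose(0, 2, 1)
freq_att = scaled_dot_attention(xf, xf, xf).transpose(0, 2, 1)

# Combine the two views, e.g. by concatenation along the feature dimension.
out = np.concatenate([time_att, freq_att], axis=-1)  # (1, 50, 80)
print(out.shape)
```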


Journal ArticleDOI
TL;DR: This paper discusses how to test multiple hypotheses simultaneously while limiting the type I error rate caused by α inflation, and explains the differences between MCTs so that they can be applied appropriately.
Abstract: Multiple comparison tests (MCTs) are performed several times on the means of experimental conditions. When the null hypothesis is rejected in an overall test, MCTs are performed to determine whether certain experimental conditions have a statistically significant mean difference or whether there is a specific pattern among the group means. A problem occurs because the error rate increases as multiple hypothesis tests are performed simultaneously. Consequently, in an MCT, it is necessary to control the error rate to an appropriate level. In this paper, we discuss how to test multiple hypotheses simultaneously while limiting the type I error rate, which is caused by α inflation. To choose the appropriate test, we must maintain the balance between statistical power and the type I error rate. If the test is too conservative, a type I error is not likely to occur; however, the test may then have insufficient power, resulting in an increased probability of type II errors. Most researchers hope to find the best way of adjusting the type I error rate to discriminate real differences in the observed data without wasting too much statistical power. It is expected that this paper will help researchers understand the differences between MCTs and apply them appropriately.

446 citations
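The α-inflation problem that motivates these corrections follows from basic probability: with m independent tests each run at level α, the chance of at least one false positive is 1 − (1 − α)^m. An illustrative calculation with the Bonferroni and Šidák adjustments (an example of the general idea, not taken from the paper):

```python
alpha, m = 0.05, 10  # per-test significance level and number of simultaneous tests

fwer_uncorrected = 1 - (1 - alpha) ** m      # ≈ 0.40: family-wise type I error inflates
alpha_bonferroni = alpha / m                  # test each hypothesis at 0.005
alpha_sidak = 1 - (1 - alpha) ** (1 / m)      # ≈ 0.0051, slightly less conservative

print(f"FWER without correction:  {fwer_uncorrected:.3f}")
print(f"Bonferroni per-test alpha: {alpha_bonferroni:.4f}")
print(f"Sidak per-test alpha:      {alpha_sidak:.4f}")
```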


Proceedings ArticleDOI
Wayne Xiong, Lingfeng Wu, Fileno A. Alleva, Jasha Droppo, Xuedong Huang, Andreas Stolcke
15 Apr 2018
TL;DR: The latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains is described, which adds a CNN-BLSTM acoustic model to the set of model architectures combined previously, and includes character-based and dialog session aware LSTM language models in rescoring.
Abstract: We describe the latest version of Microsoft's conversational speech recognition system for the Switchboard and CallHome domains. The system adds a CNN-BLSTM acoustic model to the set of model architectures we combined previously, and includes character-based and dialog session aware LSTM language models in rescoring. For system combination we adopt a two-stage approach, whereby acoustic model posteriors are first combined at the senone/frame level, followed by a word-level voting via confusion networks. We also added another language model rescoring step following the confusion network combination. The resulting system yields a 5.1% word error rate on the NIST 2000 Switchboard test set, and 9.8% on the CallHome subset.

386 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a "sending or not sending" protocol based on twin-field quantum key distribution (TF-QKD), which can tolerate large misalignment errors.
Abstract: Based on the novel idea of twin-field quantum key distribution [TF-QKD; Lucamarini et al., Nature (London) 557, 400 (2018)], we present a protocol named the "sending or not sending TF-QKD" protocol, which can tolerate large misalignment error. A revolutionary theoretical breakthrough in quantum communication, TF-QKD changes the channel-loss dependence of the key rate from linear to the square root of the channel transmittance. However, it demands the challenging technology of long-distance single-photon interference, and also, as stated in the original paper, the security proof was not finalized there due to the possible effects of the later announced phase information. Here we show by a concrete eavesdropping scheme that the later phase announcement does have important effects and that the traditional formulas of the decoy-state method do not apply to the original protocol. We then present our "sending or not sending" protocol. Our protocol does not take postselection for the bits in the $Z$-basis (signal pulses), and hence the traditional decoy-state method directly applies and automatically resolves the issue of the security proof. Most importantly, our protocol presents a negligibly small error rate in the $Z$-basis because it does not request any single-photon interference in this basis. Thus our protocol greatly improves the tolerable threshold of misalignment error in single-photon interference from a few percent in the original protocol to more than 45%. As shown numerically, our protocol exceeds a secure distance of 700, 600, 500, or 300 km even when the single-photon interference misalignment error rate is as large as 15%, 25%, 35%, or 45%.

266 citations
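The square-root improvement mentioned in the abstract can be illustrated numerically: if η is the total channel transmittance, conventional point-to-point QKD key rates scale roughly linearly in η, whereas TF-QKD-type protocols scale as √η. A toy comparison, assuming a typical 0.2 dB/km fiber loss (an assumed figure for illustration, not taken from the paper):

```python
import math

def transmittance(distance_km: float, loss_db_per_km: float = 0.2) -> float:
    """Total channel transmittance of a fiber link of the given length."""
    return 10 ** (-loss_db_per_km * distance_km / 10)

for d in (100, 300, 500):
    eta = transmittance(d)
    print(f"{d:>4} km: eta = {eta:.2e}, "
          f"linear-scaling rate ~ {eta:.2e}, "
          f"sqrt-scaling rate ~ {math.sqrt(eta):.2e}")
```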


Posted Content
TL;DR: This work investigates training end-to-end speech recognition models with the recurrent neural network transducer (RNN-T) and finds that performance can be improved further through the use of sub-word units ('wordpieces') which capture longer context and significantly reduce substitution errors.
Abstract: We investigate training end-to-end speech recognition models with the recurrent neural network transducer (RNN-T): a streaming, all-neural, sequence-to-sequence architecture which jointly learns acoustic and language model components from transcribed acoustic data. We explore various model architectures and demonstrate how the model can be improved further if additional text or pronunciation data are available. The model consists of an 'encoder', which is initialized from a connectionist temporal classification-based (CTC) acoustic model, and a 'decoder' which is partially initialized from a recurrent neural network language model trained on text data alone. The entire neural network is trained with the RNN-T loss and directly outputs the recognized transcript as a sequence of graphemes, thus performing end-to-end speech recognition. We find that performance can be improved further through the use of sub-word units ('wordpieces') which capture longer context and significantly reduce substitution errors. The best RNN-T system, a twelve-layer LSTM encoder with a two-layer LSTM decoder trained with 30,000 wordpieces as output targets, achieves a word error rate of 8.5% on voice-search and 5.2% on voice-dictation tasks and is comparable to a state-of-the-art baseline at 8.3% on voice-search and 5.4% on voice-dictation.

237 citations


Book ChapterDOI
18 Sep 2018
TL;DR: The TED-LIUM 3 corpus, dedicated to speech recognition in English, multiplies the available data to train acoustic models by a factor of more than two compared with TED-LIUM 2.
Abstract: In this paper, we present the TED-LIUM release 3 corpus (TED-LIUM 3 is available on https://lium.univ-lemans.fr/ted-lium3/) dedicated to speech recognition in English, which multiplies the available data to train acoustic models by a factor of more than two in comparison with TED-LIUM 2. We present recent developments on Automatic Speech Recognition (ASR) systems in comparison with the two previous releases of the TED-LIUM corpus from 2012 and 2014. We demonstrate that passing from 207 to 452 h of transcribed speech training data is far more useful for end-to-end ASR systems than for HMM-based state-of-the-art ones. This is the case even if the HMM-based ASR system still outperforms the end-to-end ASR system when the size of the audio training data is 452 h, with a Word Error Rate (WER) of 6.7% and 13.7%, respectively. Finally, we propose two repartitions of the TED-LIUM release 3 corpus: the legacy repartition that is the same as that existing in release 2, and a new repartition, calibrated and designed to make experiments on speaker adaptation. Similar to the first two releases, the TED-LIUM 3 corpus will be freely available for the research community.

208 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed the use of temporal convolution, in the form of time-delay neural network (TDNN) layers, along with unidirectional LSTM layers to limit the latency to 200 ms.
Abstract: Bidirectional long short-term memory (BLSTM) acoustic models provide a significant word error rate reduction compared to their unidirectional counterpart, as they model both the past and future temporal contexts. However, it is nontrivial to deploy bidirectional acoustic models for online speech recognition due to an increase in latency. In this letter, we propose the use of temporal convolution, in the form of time-delay neural network (TDNN) layers, along with unidirectional LSTM layers to limit the latency to 200 ms. This architecture has been shown to outperform the state-of-the-art low frame rate (LFR) BLSTM models. We further improve these LFR BLSTM acoustic models by operating them at higher frame rates at lower layers and show that the proposed model performs similar to these mixed frame rate BLSTMs. We present results on the Switchboard 300 h LVCSR task and the AMI LVCSR task, in the three microphone conditions.

181 citations


Proceedings ArticleDOI
12 Apr 2018
TL;DR: The authors explored different topologies and their variants of the attention layer, and compared different pooling methods on the attention weights, showing that attention-based models can improve the Equal Error Rate (EER) of a speaker verification system by a relative 14% compared to a non-attention LSTM baseline model.
Abstract: Attention-based models have recently shown great performance on a range of tasks, such as speech recognition, machine translation, and image captioning, due to their ability to summarize relevant information that expands through the entire length of an input sequence. In this paper, we analyze the usage of attention mechanisms for the problem of sequence summarization in our end-to-end text-dependent speaker recognition system. We explore different topologies and their variants of the attention layer, and compare different pooling methods on the attention weights. Ultimately, we show that attention-based models can improve the Equal Error Rate (EER) of our speaker verification system by a relative 14% compared to our non-attention LSTM baseline model.

174 citations
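The attention pooling explored above replaces simple averaging of frame-level embeddings with a weighted average whose weights are learned per frame. A minimal numpy sketch of such a pooling step (illustrative only; the paper compares several attention topologies and pooling variants):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_pool(frame_embeddings, w, b):
    """Pool (T, D) frame embeddings into one utterance embedding using
    scalar attention scores e_t = w . h_t + b and weights a = softmax(e)."""
    scores = frame_embeddings @ w + b          # (T,) one score per frame
    weights = softmax(scores)                  # attention weights over frames
    return weights @ frame_embeddings          # (D,) weighted average

T, D = 30, 16
h = np.random.randn(T, D)
w, b = np.random.randn(D), 0.0                 # learned parameters in a real system
utt_embedding = attention_pool(h, w, b)
print(utt_embedding.shape)                     # (16,)
```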


Proceedings ArticleDOI
15 Apr 2018
TL;DR: This work demonstrates that the use of shallow fusion with a neural LM with wordpieces yields a 9.1% relative word error rate reduction over the authors' competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring on Google Voice Search.
Abstract: Attention-based sequence-to-sequence models for automatic speech recognition jointly train an acoustic model, language model, and alignment mechanism. Thus, the language model component is only trained on transcribed audio-text pairs. This leads to the use of shallow fusion with an external language model at inference time. Shallow fusion refers to log-linear interpolation with a separately trained language model at each step of the beam search. In this work, we investigate the behavior of shallow fusion across a range of conditions: different types of language models, different decoding units, and different tasks. On Google Voice Search, we demonstrate that the use of shallow fusion with a neural LM with wordpieces yields a 9.1% relative word error rate reduction (WERR) over our competitive attention-based sequence-to-sequence model, obviating the need for second-pass rescoring.
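Concretely, the log-linear interpolation described above adds a weighted external-LM log-probability to the decoder's log-probability for every candidate token at every beam-search step. A schematic sketch with a hypothetical lm_weight parameter and toy scoring functions standing in for the real models (not the paper's implementation):

```python
import math

def shallow_fusion_step(beam, asr_log_probs, lm_log_probs, lm_weight=0.3):
    """Extend each beam hypothesis by one token, scoring with
    log p_asr(y|x) + lm_weight * log p_lm(y). Returns the top candidates."""
    candidates = []
    for prefix, score in beam:
        for token, asr_lp in asr_log_probs(prefix).items():
            fused = score + asr_lp + lm_weight * lm_log_probs(prefix, token)
            candidates.append((prefix + [token], fused))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[: len(beam)]

# Toy distributions standing in for the seq2seq model and the external LM.
asr = lambda prefix: {"a": math.log(0.6), "b": math.log(0.4)}
lm = lambda prefix, token: math.log(0.5)
beam = [([], 0.0)]
print(shallow_fusion_step(beam, asr, lm))
```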

Proceedings ArticleDOI
06 Jun 2018
TL;DR: In this paper, a large-scale dataset is proposed to predict perceptual image error like human observers, and a deep learning model is trained using pairwise learning to predict the preference of one distorted image over another.
Abstract: The ability to estimate the perceptual error between images is an important problem in computer vision with many applications. Although it has been studied extensively, however, no method currently exists that can robustly predict visual differences like humans. Some previous approaches used hand-coded models, but they fail to model the complexity of the human visual system. Others used machine learning to train models on human-labeled datasets, but creating large, high-quality datasets is difficult because people are unable to assign consistent error labels to distorted images. In this paper, we present a new learning-based method that is the first to predict perceptual image error like human observers. Since it is much easier for people to compare two given images and identify the one more similar to a reference than to assign quality scores to each, we propose a new, large-scale dataset labeled with the probability that humans will prefer one image over another. We then train a deep-learning model using a novel, pairwise-learning framework to predict the preference of one distorted image over the other. Our key observation is that our trained network can then be used separately with only one distorted image and a reference to predict its perceptual error, without ever being trained on explicit human perceptual-error labels. The perceptual error estimated by our new metric, PieAPP, is well-correlated with human opinion. Furthermore, it significantly outperforms existing algorithms, beating the state-of-the-art by almost 3× on our test set in terms of binary error rate, while also generalizing to new kinds of distortions, unlike previous learning-based methods.
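The pairwise-learning framework described above trains on preference probabilities rather than absolute quality scores: the network produces a per-image error score, and the probability that image A is preferred over image B can then be derived from the difference of the two scores. The sketch below uses a Bradley-Terry-style logistic form for illustration, which is not necessarily the paper's exact function or architecture:

```python
import numpy as np

def preference_probability(error_a, error_b):
    """Model P(A preferred over B) from the two predicted perceptual errors:
    a lower error for A should give a probability above 0.5."""
    return 1.0 / (1.0 + np.exp(error_a - error_b))

def pairwise_loss(error_a, error_b, human_pref_prob):
    """Cross-entropy between the predicted preference and the human label."""
    p = preference_probability(error_a, error_b)
    return -(human_pref_prob * np.log(p) + (1 - human_pref_prob) * np.log(1 - p))

# Toy case: the model scores A as less distorted than B; humans prefer A 80% of the time.
print(pairwise_loss(error_a=0.3, error_b=0.9, human_pref_prob=0.8))
```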

Journal ArticleDOI
TL;DR: A new neural network architecture is proposed that combines a deep convolutional neural network with a sequence-to-sequence encoder-decoder to solve the problem of recognizing isolated handwritten words, enabling it to recognize any given word.

Journal ArticleDOI
TL;DR: This manuscript introduces the end-to-end embedding of a CNN into an HMM, while interpreting the outputs of the CNN in a Bayesian framework, and compares the hybrid modelling to a tandem approach and evaluates the gain of model combination.
Abstract: This manuscript introduces the end-to-end embedding of a CNN into an HMM, while interpreting the outputs of the CNN in a Bayesian framework. The hybrid CNN-HMM combines the strong discriminative abilities of CNNs with the sequence modelling capabilities of HMMs. Most current approaches in the field of gesture and sign language recognition disregard the necessity of dealing with sequence data both for training and evaluation. With our presented end-to-end embedding we are able to improve over the state-of-the-art on three challenging benchmark continuous sign language recognition tasks by between 15 and 38% relative reduction in word error rate and up to 20% absolute. We analyse the effect of the CNN structure, network pretraining and number of hidden states. We compare the hybrid modelling to a tandem approach and evaluate the gain of model combination.
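The "Bayesian framework" mentioned above is the usual hybrid NN/HMM conversion of the CNN's frame posteriors p(s|x) into scaled likelihoods p(x|s) ∝ p(s|x) / p(s) before HMM decoding. A minimal sketch with made-up numbers for the state priors (illustrative, not the authors' code):

```python
import numpy as np

def scaled_likelihoods(posteriors, state_priors):
    """Convert frame-level CNN posteriors p(s|x) into HMM emission scores
    p(x|s) ∝ p(s|x) / p(s), the standard hybrid NN/HMM trick."""
    return posteriors / state_priors

# Toy example: 3 frames, 4 HMM states.
posteriors = np.array([[0.7, 0.1, 0.1, 0.1],
                       [0.2, 0.5, 0.2, 0.1],
                       [0.1, 0.1, 0.2, 0.6]])
priors = np.array([0.4, 0.3, 0.2, 0.1])   # in practice estimated from state alignments
print(scaled_likelihoods(posteriors, priors))
```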

Proceedings ArticleDOI
24 Sep 2018
TL;DR: This paper formulates audio-to-semantic understanding as a sequence-to-sequence problem, and proposes and compares various encoder-decoder based approaches that optimize both modules jointly, in an end-to-end manner.
Abstract: Conventional spoken language understanding systems consist of two main components: an automatic speech recognition module that converts audio to a transcript, and a natural language understanding module that transforms the resulting text (or top N hypotheses) into a set of domains, intents, and arguments. These modules are typically optimized independently. In this paper, we formulate audio to semantic understanding as a sequence-to-sequence problem [1]. We propose and compare various encoder-decoder based approaches that optimize both modules jointly, in an end-to-end manner. Evaluations on a real-world task show that 1) having an intermediate text representation is crucial for the quality of the predicted semantics, especially the intent arguments and 2) jointly optimizing the full system improves overall accuracy of prediction. Compared to independently trained models, our best jointly trained model achieves similar domain and intent prediction F1 scores, but improves argument word error rate by 18% relative.

Proceedings ArticleDOI
Zhuo Chen, Xiong Xiao, Takuya Yoshioka, Hakan Erdogan, Jinyu Li, Yifan Gong
18 Dec 2018
TL;DR: This work proposes a simple yet effective method for multi-channel far-field overlapped speech recognition that achieves more than 24% relative word error rate (WER) reduction compared to fixed beamforming with oracle selection.
Abstract: Although advances in close-talk speech recognition have resulted in relatively low error rates, the recognition performance in far-field environments is still limited due to low signal-to-noise ratio, reverberation, and overlapped speech from simultaneous speakers, which is especially difficult. To solve these problems, beamforming and speech separation networks were previously proposed. However, they tend to suffer from leakage of interfering speech or limited generalizability. In this work, we propose a simple yet effective method for multi-channel far-field overlapped speech recognition. In the proposed system, three different features are formed for each target speaker, namely, spectral, spatial, and angle features. Then a neural network is trained using all features with a target of the clean speech of the required speaker. An iterative update procedure is proposed in which the mask-based beamforming and mask estimation are performed alternately. The proposed system was evaluated with real recorded meetings with different levels of overlap ratios. The results show that the proposed system achieves more than 24% relative word error rate (WER) reduction compared to fixed beamforming with oracle selection. Moreover, as the overlap ratio rises from 20% to 70+%, only a 3.8% WER increase is observed for the proposed system.

Journal ArticleDOI
TL;DR: A novel model is proposed to enhance the recognition accuracy of short-utterance speaker recognition systems, using a convolutional neural network to process spectrograms, which can describe speakers better; the system achieves considerable accuracy as well as reasonable convergence speed.
Abstract: During the last few years, the speaker recognition technique has been widely attractive for its extensive application in many fields, such as speech communications, domestic services, and smart terminals. As a critical method, the Gaussian mixture model (GMM) makes it possible to achieve recognition capability that is close to the hearing ability of humans on long speech. However, the GMM fails to recognize a short-utterance speaker with high accuracy. Aiming at solving this problem, in this paper, we propose a novel model to enhance the recognition accuracy of the short-utterance speaker recognition system. Different from traditional models based on the GMM, we design a method to train a convolutional neural network to process spectrograms, which can describe speakers better. Thus, the recognition system gains considerable accuracy as well as reasonable convergence speed. The experimental results show that our model can help to decrease the equal error rate of the recognition from 4.9% to 2.5%.

Proceedings ArticleDOI
15 Apr 2018
TL;DR: This work considers two loss functions which approximate the expected number of word errors: either by sampling from the model, or by using N-best lists of decoded hypotheses, which are found to be more effective than the sampling-based method.
Abstract: Sequence-to-sequence models, such as attention-based models in automatic speech recognition (ASR), are typically trained to optimize the cross-entropy criterion which corresponds to improving the log-likelihood of the data. However, system performance is usually measured in terms of word error rate (WER), not log-likelihood. Traditional ASR systems benefit from discriminative sequence training which optimizes criteria such as the state-level minimum Bayes risk (sMBR) which are more closely related to WER. In the present work, we explore techniques to train attention-based models to directly minimize expected word error rate. We consider two loss functions which approximate the expected number of word errors: either by sampling from the model, or by using N-best lists of decoded hypotheses, which we find to be more effective than the sampling-based method. In experimental evaluations, we find that the proposed training procedure improves performance by up to 8.2% relative to the baseline system. This allows us to train grapheme-based, uni-directional attention-based models which match the performance of a traditional, state-of-the-art, discriminative sequence-trained system on a mobile voice-search task.
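The N-best approximation described above replaces the expectation over all hypotheses with a sum over the decoded N-best list, weighting each hypothesis's word-error count by its renormalized model probability, usually with the mean error subtracted as a baseline. A schematic sketch with toy numbers (a reconstruction of the idea, not the paper's implementation):

```python
import numpy as np

def expected_wer_loss(nbest_log_probs, nbest_word_errors):
    """Approximate expected word errors over an N-best list:
    sum_i P(y_i|x) * (W(y_i) - mean_W), with P renormalized over the list."""
    log_p = np.asarray(nbest_log_probs, dtype=float)
    p = np.exp(log_p - log_p.max())
    p /= p.sum()                                   # renormalize over the N-best list
    errors = np.asarray(nbest_word_errors, dtype=float)
    baseline = errors.mean()                       # variance-reducing baseline
    return float(np.sum(p * (errors - baseline)))

# Toy N-best list: model log-scores and word-error counts against the reference.
print(expected_wer_loss([-1.2, -1.5, -2.0], [0, 1, 3]))
```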

Book ChapterDOI
TL;DR: The TED-LIUM corpus as mentioned in this paper has been used to train an end-to-end ASR system with a large amount of transcribed speech training data, passing from 207 to 452 hours.
Abstract: In this paper, we present the TED-LIUM release 3 corpus dedicated to speech recognition in English, which multiplies the available data to train acoustic models by a factor of more than two in comparison with TED-LIUM 2. We present recent developments on Automatic Speech Recognition (ASR) systems in comparison with the two previous releases of the TED-LIUM corpus from 2012 and 2014. We demonstrate that passing from 207 to 452 hours of transcribed speech training data is far more useful for end-to-end ASR systems than for HMM-based state-of-the-art ones, even if the HMM-based ASR system still outperforms the end-to-end ASR system when the size of the audio training data is 452 hours, with Word Error Rates (WER) of 6.6% and 13.7%, respectively. Lastly, we propose two repartitions of the TED-LIUM release 3 corpus: the legacy one, which is the same as the one existing in release 2, and a new one, calibrated and designed to make experiments on speaker adaptation. Like the first two releases, the TED-LIUM 3 corpus will be freely available for the research community.

Journal ArticleDOI
TL;DR: In this article, a flexible piezoelectric acoustic sensor (f-PAS) with a highly sensitive multi-resonant frequency band was fabricated by mimicking the operating mechanism of the basilar membrane in the human cochlea.

Proceedings ArticleDOI
01 Dec 2018
TL;DR: In this article, the authors proposed a hybrid CTC/attention architecture for audio-visual recognition of speech in the wild, which leads to a 1.3% absolute decrease in word error rate over the audio-only model and achieves state-of-the-art performance on the LRS2 database.
Abstract: Recent works in speech recognition rely either on connectionist temporal classification (CTC) or sequence-to-sequence models for character-level recognition. CTC assumes conditional independence of individual characters, whereas attention-based models can provide nonsequential alignments. Therefore, we could use a CTC loss in combination with an attention-based model in order to force monotonic alignments and at the same time get rid of the conditional independence assumption. In this paper, we use the recently proposed hybrid CTC/attention architecture for audio-visual recognition of speech in the wild. To the best of our knowledge, this is the first time that such a hybrid architecture is used for audio-visual recognition of speech. We use the LRS2 database and show that the proposed audio-visual model leads to a 1.3% absolute decrease in word error rate over the audio-only model and achieves new state-of-the-art performance on the LRS2 database (7% word error rate). We also observe that the audio-visual model significantly outperforms the audio-based model (up to 32.9% absolute improvement in word error rate) for several different types of noise as the signal-to-noise ratio decreases.
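The hybrid objective referred to above is typically an interpolation L = λ·L_CTC + (1 − λ)·L_attention, so the CTC branch enforces monotonic alignments while the attention decoder removes the conditional-independence assumption. A minimal PyTorch-style sketch with an assumed λ = 0.2 and random tensors standing in for real model outputs (not the paper's code):

```python
import torch
import torch.nn as nn

ctc_loss_fn = nn.CTCLoss(blank=0, zero_infinity=True)
att_loss_fn = nn.CrossEntropyLoss(ignore_index=-1)

def hybrid_ctc_attention_loss(ctc_log_probs, ctc_input_lens,
                              att_logits, targets, target_lens, lam=0.2):
    """L = lam * L_ctc + (1 - lam) * L_attention.
    ctc_log_probs: (T, B, vocab) log-softmax outputs of the encoder's CTC head.
    att_logits:    (B, U, vocab) decoder outputs aligned with `targets` (B, U)."""
    l_ctc = ctc_loss_fn(ctc_log_probs, targets, ctc_input_lens, target_lens)
    l_att = att_loss_fn(att_logits.reshape(-1, att_logits.size(-1)),
                        targets.reshape(-1))
    return lam * l_ctc + (1 - lam) * l_att

# Toy shapes: batch of 2, 50 encoder frames, 10 target tokens, vocabulary of 30.
T, B, U, V = 50, 2, 10, 30
log_probs = torch.randn(T, B, V).log_softmax(-1)
targets = torch.randint(1, V, (B, U))           # avoid the blank index 0
loss = hybrid_ctc_attention_loss(log_probs, torch.full((B,), T),
                                 torch.randn(B, U, V), targets,
                                 torch.full((B,), U))
print(loss.item())
```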

Proceedings ArticleDOI
Kartik Audhkhasi, Brian Kingsbury, Bhuvana Ramabhadran, George Saon, Michael Picheny
15 Apr 2018
TL;DR: In this paper, a joint word-character A2W model was proposed to learn to first spell the word and then recognize it, achieving a word error rate of 8.8%/13.9% on the Hub5-2000 Switchboard/CallHome test sets without any decoder, pronunciation lexicon, or externally-trained language model.
Abstract: Direct acoustics-to-word (A2W) models in the end-to-end paradigm have received increasing attention compared to conventional subword based automatic speech recognition models using phones, characters, or context-dependent hidden Markov model states. This is because A2W models recognize words from speech without any decoder, pronunciation lexicon, or externally-trained language model, making training and decoding with such models simple. Prior work has shown that A2W models require orders of magnitude more training data in order to perform comparably to conventional models. Our work also showed this accuracy gap when using the English Switchboard-Fisher data set. This paper describes a recipe to train an A2W model that closes this gap and is at par with state-of-the-art sub-word based models. We achieve a word error rate of 8.8%/13.9% on the Hub5-2000 Switchboard/CallHome test sets without any decoder or language model. We find that model initialization, training data order, and regularization have the most impact on the A2W model performance. Next, we present a joint word-character A2W model that learns to first spell the word and then recognize it. This model provides a rich output to the user instead of simple word hypotheses, making it especially useful in the case of words unseen or rarely seen during training.

Proceedings ArticleDOI
04 Mar 2018
TL;DR: DFSMN as mentioned in this paper introduces skip connections between memory blocks in adjacent layers, which enable the information flow across different layers and thus alleviate the gradient vanishing problem when building very deep structure.
Abstract: In this paper, we present an improved feedforward sequential memory network (FSMN) architecture, namely the Deep-FSMN (DFSMN), by introducing skip connections between memory blocks in adjacent layers. These skip connections enable the information flow across different layers and thus alleviate the gradient vanishing problem when building very deep structures. As a result, the DFSMN significantly benefits from these skip connections and the deep structure. We have compared the performance of the DFSMN to BLSTM both with and without lower frame rate (LFR) on several large speech recognition tasks, including English and Mandarin. Experimental results show that the DFSMN can consistently outperform the BLSTM with a dramatic gain, especially when trained with LFR using CD-Phone as modeling units. On the 20,000-hour Fisher (FSH) task, the proposed DFSMN achieves a word error rate of 9.4% by purely using the cross-entropy criterion and decoding with a 3-gram language model, which is a 1.5% absolute improvement compared to the BLSTM. On a 20,000-hour Mandarin recognition task, the LFR-trained DFSMN achieves more than 20% relative improvement compared to the LFR-trained BLSTM. Moreover, we can easily design the lookahead filter order of the memory blocks in the DFSMN to control the latency for real-time applications.
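A DFSMN memory block, as described above, augments each hidden layer with a learned weighted sum over past and future frames, and the skip connection adds the previous block's memory output directly to the current one. A simplified numpy sketch of a single block (an illustration of the idea only; the real model additionally uses strides, linear projections, and many stacked layers):

```python
import numpy as np

def dfsmn_memory_block(h, prev_memory, lookback_w, lookahead_w):
    """One simplified DFSMN memory block.
    h:           (T, D) hidden activations of the current layer
    prev_memory: (T, D) memory output of the previous block (skip connection)
    lookback_w:  (N1, D) weights for past frames; lookahead_w: (N2, D) for future frames."""
    T, _ = h.shape
    out = h + prev_memory                          # skip connection across memory blocks
    for i, w in enumerate(lookback_w, start=1):    # weighted sum over past frames
        out[i:] += w * h[:T - i]
    for j, w in enumerate(lookahead_w, start=1):   # weighted sum over future frames
        out[:T - j] += w * h[j:]
    return out

T, D = 20, 8
h = np.random.randn(T, D)
mem = dfsmn_memory_block(h, np.zeros((T, D)),
                         lookback_w=np.random.randn(4, D) * 0.1,
                         lookahead_w=np.random.randn(2, D) * 0.1)
print(mem.shape)  # (20, 8)
```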

Proceedings ArticleDOI
01 Dec 2018
TL;DR: The authors compare a suite of past methods and some of their own proposed methods for using unpaired text data to improve encoder-decoder models and find that cold fusion has a lower oracle error rate and outperforms other approaches after second pass rescoring on the Google voice search data set.
Abstract: Attention-based recurrent neural encoder-decoder models present an elegant solution to the automatic speech recognition problem. This approach folds the acoustic model, pronunciation model, and language model into a single network and requires only a parallel corpus of speech and text for training. However, unlike in conventional approaches that combine separate acoustic and language models, it is not clear how to use additional (unpaired) text. While there has been previous work on methods addressing this problem, a thorough comparison among methods is still lacking. In this paper, we compare a suite of past methods and some of our own proposed methods for using unpaired text data to improve encoder-decoder models. For evaluation, we use the medium-sized Switchboard data set and the large-scale Google voice search and dictation data sets. Our results confirm the benefits of using unpaired text across a range of methods and data sets. Surprisingly, for first-pass decoding, the rather simple approach of shallow fusion performs best across data sets. However, for Google data sets we find that cold fusion has a lower oracle error rate and outperforms other approaches after second-pass rescoring on the Google voice search data set.

Proceedings ArticleDOI
Jinyu Li, Guoli Ye, Amit Das, Rui Zhao, Yifan Gong
12 Apr 2018
TL;DR: This work proposes a much better solution by training a mixed-unit CTC model which decomposes all the OOV words into sequences of frequent words and multi-letter units, improving the baseline word-based CTC by a relative 12.09% word error rate (WER) reduction when combined with the proposed attention CTC.
Abstract: The acoustic-to-word model based on the connectionist temporal classification (CTC) criterion was shown to be a natural end-to-end (E2E) model directly targeting words as output units. However, the word-based CTC model suffers from the out-of-vocabulary (OOV) issue as it can only model a limited number of words in the output layer and maps all the remaining words into an OOV output node. Hence, such a word-based CTC model can only recognize the frequent words modeled by the network output nodes. Our first attempt to improve the acoustic-to-word model is a hybrid CTC model which consults a letter-based CTC when the word-based CTC model emits OOV tokens during testing time. Then, we propose a much better solution by training a mixed-unit CTC model which decomposes all the OOV words into sequences of frequent words and multi-letter units. Evaluated on a 3400-hour Microsoft Cortana voice assistant task, the final acoustic-to-word solution improves the baseline word-based CTC by a relative 12.09% word error rate (WER) reduction when combined with our proposed attention CTC. Such an E2E model without using any language model (LM) or complex decoder outperforms the traditional context-dependent phoneme CTC, which has a strong LM and decoder, by a relative 6.79%.
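The mixed-unit vocabulary described above keeps frequent words intact and breaks OOV words into smaller pieces. The toy sketch below greedily splits an OOV word into the longest multi-letter units available in a hypothetical unit inventory (illustrative only; the paper's actual unit inventory and segmentation procedure differ):

```python
def decompose(word, frequent_words, letter_units):
    """Keep frequent words whole; greedily split OOV words into the longest
    matching multi-letter units, falling back to single letters."""
    if word in frequent_words:
        return [word]
    pieces, i = [], 0
    while i < len(word):
        for size in range(min(len(word) - i, max(map(len, letter_units))), 0, -1):
            chunk = word[i:i + size]
            if chunk in letter_units or size == 1:
                pieces.append(chunk)
                i += size
                break
    return pieces

frequent = {"play", "music", "the"}            # hypothetical frequent-word list
units = {"for", "get", "ful", "ness", "un"}    # hypothetical multi-letter units
print([decompose(w, frequent, units) for w in "play unforgetfulness music".split()])
```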

Journal ArticleDOI
TL;DR: Simulations demonstrate the detection error reduction of the proposed framework compared to conventional detection techniques that are based on channel estimation.
Abstract: This paper considers a multiple-input multiple-output system with low-resolution analog-to-digital converters (ADCs). In this system, we propose a novel communication framework that is inspired by supervised learning. The key idea of the proposed framework is to learn the nonlinear input–output system, formed by the concatenation of a wireless channel and a quantization function used at the ADCs for data detection. In this framework, a conventional channel estimation process is replaced by a system learning process, in which the conditional probability mass functions (PMFs) of the nonlinear system are empirically learned by sending the repetitions of all possible data signals as pilot signals. Then, the subsequent data detection process is performed based on the empirical conditional PMFs obtained during the system learning. To reduce both the training overhead and the detection complexity, we also develop a supervised-learning-aided successive-interference-cancellation method. In this method, a data signal vector is divided into two subvectors with reduced dimensions. Then, these two subvectors are successively detected based on the conditional PMFs that are learned using artificial noise signals and an estimated channel. For the case of 1-bit ADCs, we derive an analytical expression for the vector error rate of the proposed framework under perfect channel knowledge at the receiver. Simulations demonstrate the detection error reduction of the proposed framework compared to conventional detection techniques that are based on channel estimation.
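The system-learning idea can be illustrated with a toy 1-bit-ADC simulation: send every candidate transmit vector repeatedly as a pilot, record the empirical distribution of the quantized receive vectors, and later detect by choosing the candidate whose learned distribution makes the observed output most likely. All parameters below (BPSK, a 2x2 Gaussian channel, 500 pilot repetitions) are chosen purely for illustration and are not the paper's settings:

```python
import itertools
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(2, 2))                    # channel, unknown to the receiver
noise_std = 0.5
candidates = list(itertools.product([-1.0, 1.0], repeat=2))  # all BPSK vectors

def observe(x):
    """Receive through the channel and 1-bit ADCs (sign quantization)."""
    y = H @ np.array(x) + noise_std * rng.normal(size=2)
    return tuple(np.sign(y).astype(int))

# System learning: send each candidate many times, learn empirical conditional PMFs.
pmfs = {}
for x in candidates:
    counts = Counter(observe(x) for _ in range(500))
    pmfs[x] = {r: c / 500 for r, c in counts.items()}

# Detection: pick the candidate under which the observed output is most likely.
def detect(r):
    return max(candidates, key=lambda x: pmfs[x].get(r, 1e-9))

truth = (1.0, -1.0)
errors = sum(detect(observe(truth)) != truth for _ in range(1000))
print(f"vector error rate: {errors / 1000:.3f}")
```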

Posted Content
TL;DR: This work designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequence of phoneme distributions, and a production-level speech decoder that outputs sequences of words.
Abstract: This work presents a scalable solution to open-vocabulary visual speech recognition To achieve this, we constructed the largest existing visual speech recognition dataset, consisting of pairs of text and video clips of faces speaking (3,886 hours of video) In tandem, we designed and trained an integrated lipreading system, consisting of a video processing pipeline that maps raw video to stable videos of lips and sequences of phonemes, a scalable deep neural network that maps the lip videos to sequences of phoneme distributions, and a production-level speech decoder that outputs sequences of words The proposed system achieves a word error rate (WER) of 409% as measured on a held-out set In comparison, professional lipreaders achieve either 864% or 929% WER on the same dataset when having access to additional types of contextual information Our approach significantly improves on other lipreading approaches, including variants of LipNet and of Watch, Attend, and Spell (WAS), which are only capable of 898% and 768% WER respectively

Journal ArticleDOI
TL;DR: A five-layer DNN spoofing detection classifier is trained using dynamic acoustic features and a novel, simple scoring method only using human log-likelihoods (HLLs) for spoofed speech detection is proposed, mathematically proving that the new HLL scoring method is more suitable for the spoofing detection task than the classical LLR scoring method.
Abstract: With the development of speech synthesis technology, automatic speaker verification (ASV) systems have encountered the serious challenge of spoofing attacks. In order to improve the security of ASV systems, many antispoofing countermeasures have been developed. In the front-end domain, much research has been conducted on finding effective features which can distinguish spoofed speech from genuine speech and the published results show that dynamic acoustic features work more effectively than static ones. In the back-end domain, Gaussian mixture model (GMM) and deep neural networks (DNNs) are the two most popular types of classifiers used for spoofing detection. The log-likelihood ratios (LLRs) generated by the difference of human and spoofing log-likelihoods are used as spoofing detection scores. In this paper, we train a five-layer DNN spoofing detection classifier using dynamic acoustic features and propose a novel, simple scoring method only using human log-likelihoods (HLLs) for spoofing detection. We mathematically prove that the new HLL scoring method is more suitable for the spoofing detection task than the classical LLR scoring method, especially when the spoofing speech is very similar to the human speech. We extensively investigate the performance of five different dynamic filter bank-based cepstral features and constant Q cepstral coefficients (CQCC) in conjunction with the DNN-HLL method. The experimental results show that, compared to the GMM-LLR method, the DNN-HLL method is able to significantly improve the spoofing detection accuracy. Compared with the CQCC-based GMM-LLR baseline, the proposed DNN-HLL model reduces the average equal error rate of all attack types to 0.045%, thus exceeding the performance of previously published approaches for the ASVspoof 2015 Challenge task. Fusing the CQCC-based DNN-HLL spoofing detection system with ASV systems, the false acceptance rate on spoofing attacks can be reduced significantly.
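The scoring difference discussed above is small but central: the classical LLR score is log p(x|human) − log p(x|spoof), while the proposed HLL score keeps only the human log-likelihood. A schematic comparison with toy log-likelihood values (not the paper's models):

```python
def llr_score(human_loglik, spoof_loglik):
    """Classical log-likelihood-ratio score for spoofing detection."""
    return human_loglik - spoof_loglik

def hll_score(human_loglik):
    """Proposed scoring: use the human log-likelihood alone."""
    return human_loglik

# Toy per-utterance log-likelihoods under the human and spoof output classes.
utterances = {"genuine": (-10.0, -25.0), "spoofed": (-18.0, -12.0)}
for name, (h_ll, s_ll) in utterances.items():
    print(f"{name}: LLR = {llr_score(h_ll, s_ll):+.1f}, HLL = {hll_score(h_ll):+.1f}")
```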

Posted Content
TL;DR: A new learning-based method that is the first to predict perceptual image error like human observers, and significantly outperforms existing algorithms, beating the state-of-the-art by almost 3× on the authors' test set in terms of binary error rate, while also generalizing to new kinds of distortions, unlike previous learning- based methods.
Abstract: The ability to estimate the perceptual error between images is an important problem in computer vision with many applications. Although it has been studied extensively, however, no method currently exists that can robustly predict visual differences like humans. Some previous approaches used hand-coded models, but they fail to model the complexity of the human visual system. Others used machine learning to train models on human-labeled datasets, but creating large, high-quality datasets is difficult because people are unable to assign consistent error labels to distorted images. In this paper, we present a new learning-based method that is the first to predict perceptual image error like human observers. Since it is much easier for people to compare two given images and identify the one more similar to a reference than to assign quality scores to each, we propose a new, large-scale dataset labeled with the probability that humans will prefer one image over another. We then train a deep-learning model using a novel, pairwise-learning framework to predict the preference of one distorted image over the other. Our key observation is that our trained network can then be used separately with only one distorted image and a reference to predict its perceptual error, without ever being trained on explicit human perceptual-error labels. The perceptual error estimated by our new metric, PieAPP, is well-correlated with human opinion. Furthermore, it significantly outperforms existing algorithms, beating the state-of-the-art by almost 3x on our test set in terms of binary error rate, while also generalizing to new kinds of distortions, unlike previous learning-based methods.

Journal ArticleDOI
TL;DR: This work proposes a modular structure on the neural network, applying a progressive pretraining regimen, and improving the objective function with transfer learning and a discriminative training criterion, which achieves over 30% relative improvement in word error rate.
Abstract: Unsupervised single-channel overlapped speech recognition is one of the hardest problems in automatic speech recognition (ASR). Permutation invariant training (PIT) is a state-of-the-art model-based approach, which applies a single neural network to solve this single-input, multiple-output modeling problem. We propose to advance the current state of the art by imposing a modular structure on the neural network, applying a progressive pretraining regimen, and improving the objective function with transfer learning and a discriminative training criterion. The modular structure splits the problem into three subtasks: frame-wise interpreting, utterance-level speaker tracing, and speech recognition. The pretraining regimen uses these modules to solve progressively harder tasks. Transfer learning leverages parallel clean speech to improve the training targets for the network. Our discriminative training formulation is a modification of standard formulations that also penalizes competing outputs of the system. Experiments are conducted on the artificial overlapped Switchboard and hub5e-swb dataset. The proposed framework achieves over 30% relative improvement in word error rate over both a strong jointly trained system, PIT for ASR, and a separately optimized system, PIT for speech separation with a clean speech ASR model. The improvement comes from better model generalization, training efficiency, and sequence-level linguistic knowledge integration.