Open Access Proceedings Article

Framewise phoneme classification with bidirectional LSTM and other neural network architectures

Alex Graves, Jürgen Schmidhuber
Neural Networks, Vol. 18, pp. 602–610
TL;DR
In this article, a modified, full-gradient version of the LSTM learning algorithm is applied to framewise phoneme classification on the TIMIT database; the results support the view that contextual information is crucial to speech processing and suggest that bidirectional networks outperform unidirectional ones.
Abstract
In this paper, we present bidirectional Long Short-Term Memory (LSTM) networks and a modified, full-gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and that LSTM is both much faster and more accurate than standard Recurrent Neural Networks (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.
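As a concrete illustration of the architecture evaluated here, the following is a minimal framewise BLSTM classifier sketched in PyTorch (which postdates the paper). The class name, hidden size, and training snippet are my own illustrative assumptions; the input and output sizes loosely follow the paper's TIMIT setup of 26 acoustic features per frame and 61 phoneme classes.

```python
import torch
import torch.nn as nn

class FramewiseBLSTM(nn.Module):
    """Sketch of a bidirectional LSTM for framewise phoneme classification."""
    def __init__(self, n_features=26, n_hidden=93, n_phonemes=61):
        super().__init__()
        # bidirectional=True runs one LSTM forward and one backward in time,
        # so every frame's representation sees both past and future context
        self.blstm = nn.LSTM(n_features, n_hidden,
                             batch_first=True, bidirectional=True)
        # one phoneme posterior per frame, from the concatenated directions
        self.out = nn.Linear(2 * n_hidden, n_phonemes)

    def forward(self, frames):           # frames: (batch, time, n_features)
        context, _ = self.blstm(frames)  # (batch, time, 2 * n_hidden)
        return self.out(context)         # per-frame phoneme scores

model = FramewiseBLSTM()
x = torch.randn(4, 100, 26)              # 4 utterances, 100 frames each
logits = model(x)                        # (4, 100, 61)
targets = torch.randint(61, (4, 100))    # dummy framewise phoneme labels
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 61), targets.reshape(-1))
loss.backward()
```

Because the forward and backward passes over the utterance are concatenated, each frame's prediction conditions on both past and future input, which is the property the paper credits for BLSTM's advantage over unidirectional networks.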



Citations
Proceedings Article

Singing voice detection with deep recurrent neural networks

TL;DR: A new method for singing voice detection based on a Bidirectional Long Short-Term Memory (BLSTM) Recurrent Neural Network (RNN), which takes past and future temporal context into account when deciding on the presence or absence of singing voice.
Journal Article

Portfolio optimization with return prediction using deep learning and machine learning

TL;DR: Experimental results show that mean-variance (MV) and omega models with random forest (RF) return prediction, i.e., RF+MVF and RF+OF, outperform the other models; the omega model with SVR prediction (SVR+OF) performs best among the OF models, and investors are advised to build MVF with RF return prediction for daily trading investment.
Journal Article

A Systematic Review of Deep Learning Approaches to Educational Data Mining

TL;DR: The main goals of this study are to identify the EDM tasks that have benefited from deep learning and those that remain to be explored, to describe the main datasets used, and to provide an overview of the key concepts, main architectures, and configurations of deep learning and its applications to EDM.
Posted Content

Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets

TL;DR: It is shown that merging locally available information online during a computation with suitable real-time top-down learning signals provides highly capable approximations to backpropagation through time (BPTT).
Journal Article

Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks

TL;DR: QuAIL is presented, the first reading comprehension (RC) dataset to combine text-based, world-knowledge, and unanswerable questions, and to provide question-type annotation that enables diagnostics of the reasoning strategies of a given QA system.
References
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called Long Short-Term Memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
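As a worked one-liner (my simplified notation, not the paper's full derivation): the constant error carousel keeps error flow constant because the cell-state update is additive, so the cell's self-connection has unit derivative.

```latex
% Constant error carousel, simplified: additive cell-state update
% through an input gate y^{in}, hence unit local derivative.
\[
  s_c(t) \;=\; s_c(t-1) \;+\; y^{\mathrm{in}}(t)\, g\big(\mathrm{net}_c(t)\big),
  \qquad
  \frac{\partial s_c(t)}{\partial s_c(t-1)} \;=\; 1 .
\]
% Error recirculating through the cell is scaled by exactly 1 at every
% step, so it neither vanishes nor explodes over long time lags.
```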
Book

Neural networks for pattern recognition

TL;DR: This is the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition, and is designed as a text, with over 100 exercises, to benefit anyone involved in the fields of neural computation and pattern recognition.
Journal Article

Bidirectional recurrent neural networks

TL;DR: It is shown how the proposed bidirectional structure can be easily modified to allow efficient estimation of the conditional posterior probability of complete symbol sequences without making any explicit assumption about the shape of the distribution.
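In symbols (my notation for the now-standard formulation, not a quote from the paper): a forward state reads the sequence left to right, a backward state right to left, and each output combines both, so every prediction conditions on the entire input sequence.

```latex
% Bidirectional recurrence: forward state summarizes x_1..x_t,
% backward state summarizes x_t..x_T, outputs combine both.
\[
  \overrightarrow{h}_t = f\big(x_t,\, \overrightarrow{h}_{t-1}\big), \qquad
  \overleftarrow{h}_t  = f\big(x_t,\, \overleftarrow{h}_{t+1}\big), \qquad
  y_t = g\big(W_{f}\,\overrightarrow{h}_t + W_{b}\,\overleftarrow{h}_t + b\big) .
\]
```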

Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies

TL;DR: An analysis of gradient-based learning in recurrent networks showing that error signals propagated backwards in time tend to either blow up or decay exponentially with the temporal lag, which makes learning long-term dependencies difficult.
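A compact statement of the chapter's core argument, in the now-standard formulation (my notation): the gradient across a lag of k steps is a product of k Jacobians, which shrinks or grows exponentially in k.

```latex
% Error flow across a lag of k steps in a recurrent net with
% hidden state h_t and loss E_t:
\[
  \frac{\partial \mathcal{E}_t}{\partial h_{t-k}}
  \;=\;
  \frac{\partial \mathcal{E}_t}{\partial h_t}
  \prod_{i=t-k+1}^{t} \frac{\partial h_i}{\partial h_{i-1}} .
\]
% If the Jacobian norms stay below 1, the gradient vanishes
% exponentially in k; if they exceed 1, it explodes.
```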