Tara N. Sainath

Researcher at Google

Publications -  317
Citations -  31002

Tara N. Sainath is an academic researcher at Google who has contributed to research on topics including Computer science and Word error rate. The author has an h-index of 61 and has co-authored 274 publications receiving 25,183 citations. Previous affiliations of Tara N. Sainath include IBM and Nuance Communications.

Papers
Journal Article

Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups

TL;DR: This article provides an overview of progress and represents the shared views of four research groups that have had recent successes in using DNNs for acoustic modeling in speech recognition.
Journal Article

Deep Neural Networks for Acoustic Modeling in Speech Recognition

TL;DR: This paper provides an overview of this progress and represents the shared views of four research groups who have had recent successes in using deep neural networks for acoustic modeling in speech recognition.
Proceedings Article

Convolutional, Long Short-Term Memory, fully connected Deep Neural Networks

TL;DR: This paper takes advantage of the complementarity of CNNs, LSTMs and DNNs by combining them into one unified architecture, and finds that the CLDNN provides a 4-6% relative improvement in WER over an LSTM, the strongest of the three individual models.
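To make the combined architecture concrete, here is a minimal CLDNN-style sketch in PyTorch. It is not the paper's implementation; the layer sizes, the number of mel features, and the senone count are illustrative assumptions, but it follows the same convolution, then LSTM, then fully connected layout.

```python
# A minimal CLDNN-style acoustic model sketch (not the authors' code).
# Input: a window of log-mel filterbank frames; output: per-frame senone logits.
# N_MELS, N_SENONES, and all layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

N_MELS = 40        # assumed number of log-mel features per frame
N_SENONES = 4000   # assumed number of context-dependent output states

class CLDNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolution over (time, frequency) to reduce spectral variation.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(3, 3), padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),   # pool only along frequency
        )
        # Linear layer to reduce the conv output before the LSTMs.
        self.reduce = nn.Linear(32 * (N_MELS // 2), 256)
        # Stacked LSTMs model the temporal dynamics.
        self.lstm = nn.LSTM(input_size=256, hidden_size=512, num_layers=2,
                            batch_first=True)
        # Fully connected (DNN) layers map to senone posteriors.
        self.dnn = nn.Sequential(
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, N_SENONES),
        )

    def forward(self, x):
        # x: (batch, time, mels)
        b, t, f = x.shape
        h = self.conv(x.unsqueeze(1))                 # (batch, 32, time, mels/2)
        h = h.permute(0, 2, 1, 3).reshape(b, t, -1)   # back to (batch, time, features)
        h = self.reduce(h)
        h, _ = self.lstm(h)
        return self.dnn(h)                            # per-frame logits over senones

frames = torch.randn(8, 100, N_MELS)                  # 8 utterances, 100 frames each
logits = CLDNN()(frames)                              # (8, 100, N_SENONES)
```
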
Proceedings Article

Improving deep neural networks for LVCSR using rectified linear units and dropout

TL;DR: Training deep neural networks with rectified linear unit (ReLU) non-linearities and dropout, with minimal human hyper-parameter tuning, on a 50-hour English Broadcast News task yields a 4.2% relative improvement over a DNN trained with sigmoid units and a 14.4% relative improvement over a strong GMM/HMM system.
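As an illustration of that training recipe, the sketch below builds a feed-forward acoustic model whose hidden layers use ReLU non-linearities followed by dropout. The dimensions, number of layers, and dropout rate are assumptions made for the example, not values taken from the paper.

```python
# A hedged sketch (not the paper's implementation) of a DNN acoustic model that
# combines ReLU non-linearities with dropout, the pairing studied in the paper.
# The feature dimension, layer sizes, and dropout rate are assumptions.
import torch
import torch.nn as nn

def relu_dropout_dnn(input_dim=440, hidden_dim=1024, n_layers=4,
                     n_senones=4000, p_drop=0.5):
    layers = []
    dim = input_dim
    for _ in range(n_layers):
        layers += [nn.Linear(dim, hidden_dim),
                   nn.ReLU(),               # rectified linear units instead of sigmoids
                   nn.Dropout(p=p_drop)]    # dropout regularizes the large hidden layers
        dim = hidden_dim
    layers.append(nn.Linear(dim, n_senones))  # senone classification layer
    return nn.Sequential(*layers)

model = relu_dropout_dnn()
x = torch.randn(32, 440)          # a minibatch of spliced acoustic feature frames
logits = model(x)                 # (32, 4000) senone logits
```
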
Proceedings Article

Deep convolutional neural networks for LVCSR

TL;DR: This paper determines the appropriate architecture to make CNNs effective compared to DNNs for LVCSR tasks, and explores the behavior of neural network features extracted from CNNs on a variety of LVCSR tasks, comparing CNNs to DNNs and GMMs.
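For context, the following hedged sketch shows one plausible form of such a CNN acoustic model: convolution and pooling along the frequency axis followed by fully connected layers, with the last hidden activations usable as CNN-derived features. The specific kernel sizes and layer widths are assumptions, not the configuration selected in the paper.

```python
# A minimal CNN acoustic model sketch for LVCSR (assumed sizes, not the paper's exact setup):
# convolution over a window of frames, pooling along frequency, then fully connected layers.
import torch
import torch.nn as nn

class ConvAcousticModel(nn.Module):
    def __init__(self, n_mels=40, context=11, n_senones=4000):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 128, kernel_size=(context, 9)),  # full time context, 9-wide in frequency
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 3)),             # pool along frequency only
        )
        conv_out = 128 * ((n_mels - 9 + 1) // 3)          # flattened conv output size
        self.hidden = nn.Sequential(nn.Linear(conv_out, 1024), nn.ReLU(),
                                    nn.Linear(1024, 1024), nn.ReLU())
        self.out = nn.Linear(1024, n_senones)

    def forward(self, x):
        # x: (batch, context, n_mels), a window of consecutive log-mel frames
        h = self.conv(x.unsqueeze(1)).flatten(1)
        feats = self.hidden(h)            # these activations can serve as CNN features
        return self.out(feats)            # senone logits

window = torch.randn(16, 11, 40)
posteriors = ConvAcousticModel()(window)  # (16, 4000)
```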