
Larry Heck

Researcher at Samsung

Publications - 173
Citations - 9256

Larry Heck is an academic researcher from Samsung. He has contributed to research on topics including speaker recognition and natural language, has an h-index of 46, and has co-authored 173 publications receiving 8091 citations. His previous affiliations include Georgia Institute of Technology and Nuance Communications.

Papers
Proceedings Article

Learning deep structured semantic models for web search using clickthrough data

TL;DR: This work develops a series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space, where the relevance of a document given a query is readily computed as the distance between them.
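The scoring step summarized above, projecting a query and a document into a shared low-dimensional space and ranking by similarity, can be sketched as follows. This is a minimal illustration, not the paper's trained DSSM: the random projection matrices stand in for the learned deep networks, and all dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the learned deep projections; in the paper these are
# multi-layer networks trained on clickthrough data.
VOCAB, DIM = 1000, 128
W_query = rng.normal(size=(VOCAB, DIM))
W_doc = rng.normal(size=(VOCAB, DIM))

def project(bow: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Map a bag-of-words vector into the shared low-dimensional space."""
    v = np.tanh(bow @ W)
    return v / np.linalg.norm(v)

def relevance(query_bow: np.ndarray, doc_bow: np.ndarray) -> float:
    """Cosine similarity between the projected query and document."""
    return float(project(query_bow, W_query) @ project(doc_bow, W_doc))

# Toy example: score two documents against a query and rank them.
query = rng.integers(0, 2, VOCAB).astype(float)
docs = [rng.integers(0, 2, VOCAB).astype(float) for _ in range(2)]
scores = [relevance(query, d) for d in docs]
print(sorted(enumerate(scores), key=lambda t: -t[1]))
```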
Journal Article

Using recurrent neural networks for slot filling in spoken language understanding

TL;DR: This paper implements and compares several important RNN architectures, including Elman, Jordan, and hybrid variants, built with the publicly available Theano neural network toolkit, and reports experiments on the well-known Airline Travel Information System (ATIS) benchmark.
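For orientation, the Elman-style recurrence referenced above keeps a hidden state that carries left context from token to token while emitting one slot label per token. The sketch below is a hand-rolled forward pass with random weights and an invented label set, not the authors' Theano implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes: vocabulary, hidden state, slot labels (e.g. O, B-dest, B-date).
V, H, L = 50, 16, 3
E = rng.normal(scale=0.1, size=(V, H))     # word embeddings
W_xh = rng.normal(scale=0.1, size=(H, H))  # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(H, H))  # Elman recurrence: hidden-to-hidden
W_hy = rng.normal(scale=0.1, size=(H, L))  # hidden-to-label weights

def tag(tokens: list[int]) -> list[int]:
    """Predict one slot label per token with an Elman RNN forward pass."""
    h = np.zeros(H)
    labels = []
    for t in tokens:
        h = np.tanh(E[t] @ W_xh + h @ W_hh)  # state carries left context
        labels.append(int(np.argmax(h @ W_hy)))
    return labels

print(tag([3, 17, 42, 8]))  # one label index per input token
```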
Proceedings Article

What is left to be understood in ATIS?

TL;DR: It is concluded that, even with such low error rates, the ATIS test set still includes many unseen example categories and sequences and hence requires more data; new, larger annotated data sets from more complex tasks with realistic utterances can help avoid over-tuning in modeling and feature design.

MSR Identity Toolbox v1.0: A MATLAB Toolbox for Speaker Recognition Research

TL;DR: The MSR Identity Toolbox is released: a collection of MATLAB tools and routines for research and development in speaker recognition that provides many of the functionalities available in other open-source speaker recognition toolkits.
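A basic operation such toolkits support is scoring a speaker-verification trial against an enrolled model. The sketch below illustrates a common baseline, cosine scoring between fixed-length speaker vectors (e.g., i-vectors), written in Python rather than the toolbox's MATLAB routines; the vectors and the decision threshold are invented for the example:

```python
import numpy as np

def cosine_score(enroll: np.ndarray, test: np.ndarray) -> float:
    """Cosine similarity between two fixed-length speaker vectors."""
    return float(enroll @ test / (np.linalg.norm(enroll) * np.linalg.norm(test)))

rng = np.random.default_rng(2)
enroll_ivec = rng.normal(size=400)                     # hypothetical 400-dim vector
test_ivec = enroll_ivec + rng.normal(scale=0.3, size=400)  # same speaker plus noise

score = cosine_score(enroll_ivec, test_ivec)
THRESHOLD = 0.5                                        # invented operating point
print(score, "accept" if score > THRESHOLD else "reject")
```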
Posted Content

Contextual LSTM (CLSTM) models for Large scale NLP tasks.

TL;DR: Experimental results indicate that using both words and topics as features improves the performance of the CLSTM models over baseline LSTM models on these tasks, demonstrating the significant benefit of using context appropriately in natural language (NL) tasks.
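The mechanism summarized above amounts to feeding the recurrent model a context (topic) vector alongside each word embedding. Below is a minimal sketch with invented dimensions; a plain tanh cell stands in for the LSTM cell for brevity, so this illustrates the contextual-input idea rather than the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

H, W_DIM, T_DIM = 32, 24, 8               # hidden, word-embedding, topic sizes
IN = W_DIM + T_DIM                        # contextual input: word + topic features
W_in = rng.normal(scale=0.1, size=(IN, H))
W_rec = rng.normal(scale=0.1, size=(H, H))

def contextual_step(word_vec: np.ndarray, topic_vec: np.ndarray,
                    h: np.ndarray) -> np.ndarray:
    """One recurrent step over the concatenated word+topic input."""
    x = np.concatenate([word_vec, topic_vec])
    return np.tanh(x @ W_in + h @ W_rec)

h = np.zeros(H)
topic = rng.normal(size=T_DIM)            # fixed topic vector for the sentence
for _ in range(5):                        # five toy word embeddings
    h = contextual_step(rng.normal(size=W_DIM), topic, h)
print(h[:4])
```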