Quoc V. Le

Researcher at Google

Publications: 229
Citations: 127,721

Quoc V. Le is an academic researcher at Google whose work centers on artificial neural networks and language models. He has an h-index of 103 and has co-authored 217 publications that have received 101,217 citations. His previous affiliations include Northwestern University and Tel Aviv University.

Papers
Posted Content

SpeechStew: Simply Mix All Available Speech Recognition Data to Train One Large Neural Network

TL;DR: SpeechStew is a speech recognition model trained on a mix of publicly available speech recognition datasets: AMI, Broadcast News, Common Voice, LibriSpeech, Switchboard/Fisher, TED-LIUM, and Wall Street Journal.
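The recipe is deliberately simple: pool all corpora and sample from the union, with no per-dataset re-weighting. Below is a minimal sketch of that mixing step, assuming hypothetical loader names; it illustrates the idea, not the paper's actual pipeline.

```python
# A minimal sketch of the SpeechStew recipe: flatten every corpus into one
# pool and sample uniformly from the union, with no domain labels or
# per-dataset re-balancing. load_corpus is a hypothetical placeholder.
import random

CORPORA = ["ami", "broadcast_news", "common_voice", "librispeech",
           "switchboard_fisher", "tedlium", "wsj"]

def load_corpus(name):
    # Placeholder: in practice this would read (audio, transcript) pairs
    # from the named corpus on disk.
    return [(f"{name}_utt_{i}.wav", f"transcript {i}") for i in range(3)]

def speechstew_examples():
    pool = []
    for name in CORPORA:
        pool.extend(load_corpus(name))   # one flat pool across all corpora
    random.shuffle(pool)                 # a single mixed training stream
    return pool

print(len(speechstew_examples()), "mixed training examples")
```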
Proceedings Article

Faster Discovery of Neural Architectures by Searching for Paths in a Large Model

TL;DR: Efficient Neural Architecture Search (ENAS) is a faster, less expensive approach to automated model design than previous methods: it is more than 10x faster and 100x less resource-demanding than standard NAS.
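The core trick in ENAS is weight sharing: every candidate architecture is a path through one large shared model, so evaluating a new candidate reuses existing weights instead of training from scratch. The toy sketch below illustrates that sharing with placeholder shapes and operations; it is not the paper's actual search space or controller.

```python
# A toy illustration of ENAS-style weight sharing: each architecture is a
# path through one large model, so op_weights[layer][op] is reused by every
# path that selects that op. Shapes and ops are simplified placeholders.
import numpy as np

rng = np.random.default_rng(0)
D = 8
# One shared weight bank for the whole search space.
op_weights = [[rng.normal(size=(D, D)) for _ in range(3)] for _ in range(4)]

def sample_path(num_layers=4, num_ops=3):
    # In ENAS an RNN controller samples the path; here we sample uniformly.
    return [rng.integers(num_ops) for _ in range(num_layers)]

def forward(x, path):
    for layer, op in enumerate(path):
        x = np.tanh(op_weights[layer][op] @ x)  # chosen op, shared weights
    return x

x = rng.normal(size=D)
print(forward(x, sample_path()))  # evaluate one sampled architecture
```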
Proceedings Article

PyGlove: Symbolic Programming for Automated Machine Learning

TL;DR: PyGlove is a new Python library that implements AutoML on top of symbolic programming, making it easy to change the search space and search algorithm, to add search capabilities to existing code, and to implement complex search flows.
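The sketch below illustrates the symbolic-programming idea in plain Python: a program is kept as a manipulable object tree in which hyperparameter choices are first-class nodes that a search algorithm can rewrite. The OneOf class and materialize helper are hypothetical stand-ins, not PyGlove's actual API.

```python
# A toy sketch of the symbolic-programming idea behind PyGlove (NOT
# PyGlove's real API): hyperparameter choices live in the program as
# symbolic nodes that a search procedure can enumerate or rewrite.
import itertools

class OneOf:
    """Marks a node whose value should be chosen by the search algorithm."""
    def __init__(self, candidates):
        self.candidates = candidates

model_spec = {"layers": OneOf([2, 4, 8]),
              "activation": OneOf(["relu", "tanh"])}

def materialize(spec):
    """Enumerate every concrete program the symbolic spec describes."""
    keys = [k for k, v in spec.items() if isinstance(v, OneOf)]
    for combo in itertools.product(*(spec[k].candidates for k in keys)):
        yield {**spec, **dict(zip(keys, combo))}

for program in materialize(model_spec):
    print(program)  # each dict is a concrete, trainable configuration
```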
Posted Content

Using Web Co-occurrence Statistics for Improving Image Categorization

TL;DR: This work considers a simple approach to encapsulating common-sense knowledge using co-occurrence statistics from web documents, and observes significant improvements in recognition and localization rates on both the ImageNet Detection 2012 and SUN 2012 datasets.
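A hedged sketch of the idea: classifier scores are re-weighted so that a label gains support when labels it frequently co-occurs with on the web are also detected. The numbers and the linear mixing rule below are illustrative, not the paper's exact model.

```python
# Re-scoring classifier outputs with web co-occurrence statistics: labels
# that often co-occur with other confidently detected labels get boosted.
# All values and the mixing rule are illustrative assumptions.
import numpy as np

labels = ["dog", "leash", "laptop"]
scores = np.array([0.6, 0.7, 0.2])          # raw classifier confidences
# cooc[i, j]: how often label i appears with label j in web documents.
cooc = np.array([[1.0, 0.8, 0.1],
                 [0.8, 1.0, 0.1],
                 [0.1, 0.1, 1.0]])

alpha = 0.5                                  # weight of the context term
context = cooc @ scores                      # support from co-occurring labels
adjusted = (1 - alpha) * scores + alpha * context / context.max()
for name, s in zip(labels, adjusted):
    print(f"{name}: {s:.2f}")                # "dog" rises, "laptop" falls
```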
Proceedings Article

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision

TL;DR: This paper proposes a dual-encoder architecture that aligns visual and language representations of image-text pairs with a contrastive loss, achieving state-of-the-art results on the Conceptual Captions dataset.
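A minimal sketch of that contrastive objective, with the two encoders stubbed out as random projections: matching image-text pairs are pushed to score higher than all in-batch mismatches via a symmetric InfoNCE loss. Batch size, embedding dimension, and the 0.07 temperature are assumptions for illustration, not the paper's settings.

```python
# Dual-encoder contrastive training in miniature: normalized image and text
# embeddings are compared in-batch, and the i-th image should match the
# i-th text (and vice versa). Encoders are stubbed with random vectors.
import numpy as np

rng = np.random.default_rng(0)
B, D = 4, 16                                  # batch size, embedding dim

def normalize(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

img = normalize(rng.normal(size=(B, D)))      # stub image-tower outputs
txt = normalize(rng.normal(size=(B, D)))      # stub text-tower outputs

logits = img @ txt.T / 0.07                   # cosine similarity / temperature

def cross_entropy(logits):
    # Softmax cross-entropy with the diagonal (matching pairs) as targets.
    log_probs = logits - np.log(np.exp(logits).sum(-1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Symmetric loss: image-to-text plus text-to-image.
loss = 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
print(f"contrastive loss: {loss:.3f}")
```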