
Quoc V. Le

Researcher at Google

Publications -  229
Citations -  127,721

Quoc V. Le is an academic researcher at Google. His research focuses on artificial neural networks and language models. He has an h-index of 103 and has co-authored 217 publications receiving 101,217 citations. His previous affiliations include Northwestern University and Tel Aviv University.

Papers
Posted Content

Learning a Natural Language Interface with Neural Programmer

TL;DR: This paper presents the first weakly supervised, end-to-end neural network model to induce latent programs on a real-world dataset. It enhances the objective function of Neural Programmer, a neural network with built-in discrete operations, and applies it to WikiTableQuestions, a natural-language question-answering dataset.
Posted Content

EfficientNetV2: Smaller Models and Faster Training

TL;DR: EfficientNetV2 combines training-aware neural architecture search with model scaling to jointly optimize training speed and parameter efficiency, achieving state-of-the-art performance.
Proceedings ArticleDOI

Learning to Skim Text

TL;DR: The proposed model is a modified LSTM with jumping: a recurrent network that learns how far to jump after reading a few words of the input text. It is up to 6 times faster than the standard sequential LSTM while maintaining the same or even better accuracy.
Posted Content

Learning to Skim Text

TL;DR: The authors proposed a modified LSTM with jumping, which is up to 6 times faster than the standard sequential LSTM while maintaining the same or even better accuracy.
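The read-then-jump control flow described in the two summaries above can be illustrated with a toy sketch. Everything here (the function name `skim_read`, the recurrent update, the embedding table, the greedy jump head) is an illustrative assumption, not the paper's implementation; the paper uses an actual LSTM cell and trains the jump decision with REINFORCE.

```python
import numpy as np

def skim_read(tokens, read_size=2, max_jump=3, hidden_dim=8, seed=0):
    """Toy skim-reader: read `read_size` tokens with a recurrent update,
    then let a small jump head decide how many tokens to skip (0..max_jump)."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((hidden_dim, hidden_dim))    # recurrent weights
    E = 0.1 * rng.standard_normal((7, hidden_dim))             # tiny token-embedding table
    J = 0.1 * rng.standard_normal((max_jump + 1, hidden_dim))  # jump-decision head
    h = np.zeros(hidden_dim)
    read_positions = []
    i = 0
    while i < len(tokens):
        for _ in range(read_size):                  # read a few tokens sequentially
            if i >= len(tokens):
                break
            emb = E[sum(map(ord, tokens[i])) % 7]   # deterministic toy embedding lookup
            h = np.tanh(W @ h + emb)                # stand-in for the LSTM cell update
            read_positions.append(i)
            i += 1
        i += int(np.argmax(J @ h))                  # greedy jump; the paper learns this with REINFORCE
    return read_positions
```

Because whole spans of text are skipped rather than processed, the recurrent updates scale with the number of tokens actually read, which is where the claimed speedup over a fully sequential LSTM comes from.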
Proceedings ArticleDOI

DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection

TL;DR: This paper proposes two novel techniques: InverseAug, which inverts geometry-related augmentations (e.g., rotation) to enable accurate geometric alignment between lidar points and image pixels, and LearnableAlign, which leverages cross-attention to dynamically capture the correlations between image and lidar features during fusion.
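The cross-attention idea behind LearnableAlign can be sketched as standard scaled dot-product attention where lidar features act as queries over image features. The shapes, weight matrices, and concatenation-based fusion below are illustrative assumptions for a minimal sketch, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def learnable_align(lidar_feats, image_feats, Wq, Wk, Wv):
    """Cross-attention sketch: each lidar feature queries the image features,
    and the attention-weighted sum of image values is fused (concatenated)
    back onto the lidar feature."""
    q = lidar_feats @ Wq                                    # (N_lidar, d)
    k = image_feats @ Wk                                    # (N_img, d)
    v = image_feats @ Wv                                    # (N_img, d)
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]), axis=1)   # (N_lidar, N_img)
    return np.concatenate([lidar_feats, attn @ v], axis=1)  # fused features
```

The attention weights let each lidar point dynamically emphasize the image pixels most correlated with it, rather than relying on a fixed one-to-one projection.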