Quoc V. Le
Researcher at Google
Publications - 229
Citations - 127,721
Quoc V. Le is an academic researcher at Google whose work centers on artificial neural networks and language models. He has an h-index of 103 and has co-authored 217 publications receiving 101,217 citations. His previous affiliations include Northwestern University and Tel Aviv University.
Papers
Proceedings Article
XLNet: Generalized Autoregressive Pretraining for Language Understanding
TL;DR: The authors propose XLNet, a generalized autoregressive pretraining method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order, thereby overcoming the limitations of BERT.
Proceedings Article
Large Scale Distributed Deep Networks
Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew W. Senior, Paul A. Tucker, Ke Yang, Quoc V. Le, Andrew Y. Ng +11 more
TL;DR: This paper considers the problem of training a deep network with billions of parameters using tens of thousands of CPU cores and develops two algorithms for large-scale distributed training, Downpour SGD and Sandblaster L-BFGS, which increase the scale and speed of deep network training.
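The Downpour SGD idea in this TL;DR can be sketched as a toy parameter server with workers that fetch the current weights, compute gradients on their own data shards, and push updates back without waiting for one another. This is an illustrative simulation only: the class names, the one-feature linear model, and the learning rate are invented for the example, and a real Downpour deployment runs workers concurrently across thousands of machines with sharded parameters.

```python
# Toy sketch of Downpour-style asynchronous SGD (illustrative names only):
# a central "parameter server" holds the weights; independent workers each
# fetch the current weights, compute a gradient on their own data shard,
# and push the update back without synchronizing with other workers.

class ParameterServer:
    def __init__(self, dim):
        self.weights = [0.0] * dim

    def fetch(self):
        return list(self.weights)          # workers start from a (possibly stale) copy

    def push(self, grad, lr=0.1):
        for i, g in enumerate(grad):       # apply each update as soon as it arrives
            self.weights[i] -= lr * g

def worker_step(server, shard):
    w = server.fetch()
    # gradient of mean squared error for a one-feature linear model y = w0 * x
    grad = [0.0]
    for x, y in shard:
        grad[0] += 2 * (w[0] * x - y) * x / len(shard)
    server.push(grad)

server = ParameterServer(dim=1)
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]  # true w0 = 2
for _ in range(200):
    for shard in shards:                   # in the real system these run concurrently
        worker_step(server, shard)
print(round(server.weights[0], 2))         # converges to 2.0 on this toy data
```

Because each worker starts from whatever weights the server currently holds, updates can be computed from stale copies; the paper's observation is that training tolerates this asynchrony well in practice.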
Posted Content
Distributed Representations of Sentences and Documents
Quoc V. Le, Tomas Mikolov +1 more
TL;DR: The authors proposed paragraph vector, an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents, and achieved new state-of-the-art results on several text classification and sentiment analysis tasks.
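A heavily simplified sketch of the Paragraph Vector idea: give each document its own trainable vector and train it, jointly with word vectors, so that the document scores its own words above randomly drawn negative words. The tiny corpus, dimensionality, and hyperparameters below are invented for illustration, and the binary negative-sampling objective is a simplification of the paper's softmax formulation.

```python
import math, random

# Minimal PV-DBOW-style sketch (simplified; hyperparameters are illustrative):
# each document gets its own trainable fixed-length vector, trained to score
# words that occur in the document above randomly sampled "negative" words.

random.seed(0)
docs = [["cats", "purr", "meow"], ["dogs", "bark", "woof"]]
vocab = sorted({w for d in docs for w in d})
DIM = 4
doc_vecs = [[random.uniform(-0.5, 0.5) for _ in range(DIM)] for _ in docs]
word_vecs = {w: [random.uniform(-0.5, 0.5) for _ in range(DIM)] for w in vocab}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_pair(d, word, label, lr=0.1):
    score = sigmoid(sum(a * b for a, b in zip(doc_vecs[d], word_vecs[word])))
    err = lr * (label - score)             # gradient of the logistic loss
    for i in range(DIM):                   # update both vectors from old values
        doc_vecs[d][i], word_vecs[word][i] = (
            doc_vecs[d][i] + err * word_vecs[word][i],
            word_vecs[word][i] + err * doc_vecs[d][i],
        )

for _ in range(500):
    for d, doc in enumerate(docs):
        for w in doc:
            train_pair(d, w, 1)                        # observed word
            train_pair(d, random.choice(vocab), 0)     # negative sample

# After training, each document scores its own words higher than the other's.
score = lambda d, w: sum(a * b for a, b in zip(doc_vecs[d], word_vecs[w]))
print(score(0, "purr") > score(0, "bark"))
```

The key property this preserves from the paper is that the document vector is a fixed-length representation learned from variable-length text, usable afterwards as a feature for classification.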
Posted Content
Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc V. Le +1 more
TL;DR: This paper uses a recurrent network to generate the model descriptions of neural networks and trains this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set.
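The search loop described in this TL;DR can be illustrated with a bare-bones REINFORCE sketch: a controller samples a design choice, receives a reward standing in for validation accuracy, and shifts its distribution toward higher-reward choices. The candidate sizes, reward table, and hyperparameters here are all hypothetical; the paper's controller is a recurrent network that emits a full architecture description rather than a single choice.

```python
import math, random

# Toy REINFORCE sketch of the controller idea (toy reward table, not a real
# architecture search): the controller holds logits over candidate hidden
# sizes, samples one, observes a stand-in "validation accuracy" as reward,
# and applies the policy gradient against a moving-average baseline.

random.seed(0)
choices = [16, 32, 64, 128]                          # candidate hidden sizes
logits = [0.0] * len(choices)

def softmax(xs):
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs):
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Hypothetical reward table standing in for "train child model, measure
# validation accuracy" -- in the paper this is the expensive inner loop.
reward = {16: 0.60, 32: 0.70, 64: 0.95, 128: 0.75}

baseline, lr = 0.0, 0.1
for _ in range(2000):
    probs = softmax(logits)
    a = sample(probs)
    adv = reward[choices[a]] - baseline              # advantage vs. baseline
    baseline += 0.05 * (reward[choices[a]] - baseline)
    for i in range(len(logits)):                     # grad of log p(a) w.r.t. logits
        logits[i] += lr * adv * ((1.0 if i == a else 0.0) - probs[i])

best = choices[max(range(len(choices)), key=lambda i: logits[i])]
print(best)
```

The baseline subtraction is the standard variance-reduction trick; without it, every sampled architecture would be reinforced simply because accuracies are positive.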
Posted Content
XLNet: Generalized Autoregressive Pretraining for Language Understanding
TL;DR: XLNet is proposed, a generalized autoregressive pretraining method that enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and overcomes the limitations of BERT thanks to its autoregressive formulation.
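The permutation objective mentioned in both XLNet entries can be made concrete with a small sketch: for a factorization order z, the sequence log-likelihood is the sum over steps t of log p(x_{z_t} | x_{z_<t}), and the training objective takes the expectation over orders, so every token eventually conditions on every other token. The conditional model below is a hypothetical uniform distribution, used only to make the loop runnable; XLNet's actual two-stream attention model is not reproduced here.

```python
import itertools, math

# Illustrative sketch of permutation-based autoregressive factorization
# (not XLNet's actual architecture): for a factorization order, predict
# each token given the set of tokens already seen in that order, then
# average the log-likelihood objective over all orders.

tokens = ["new", "york", "is", "a", "city"]

def log_likelihood(order, cond_prob):
    total, seen = 0.0, []
    for pos in order:                       # predict tokens in permuted order
        total += math.log(cond_prob(tokens[pos], seen))
        seen.append(tokens[pos])
    return total

# Hypothetical uniform conditional model over a 5-word vocabulary,
# standing in for the learned network.
uniform = lambda word, context: 1.0 / len(tokens)

orders = list(itertools.permutations(range(len(tokens))))
avg = sum(log_likelihood(o, uniform) for o in orders) / len(orders)
print(round(avg, 4))                        # 5 * log(1/5) for the uniform model
```

With a context-independent model the average over orders is trivially constant; the point of the permutation objective is precisely that a learned model's conditionals do depend on `context`, so averaging over orders trains bidirectional context without the masked-input artifacts of BERT.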