Oriol Vinyals
Researcher at Google
Publications - 218
Citations - 121,048
Oriol Vinyals is an academic researcher at Google. His research topics include artificial neural networks and reinforcement learning. He has an h-index of 84 and has co-authored 200 publications receiving 82,365 citations. His previous affiliations include the University of California, San Diego and the University of California, Berkeley.
Papers
Proceedings Article
Hierarchical Representations for Efficient Architecture Search
TL;DR: In this article, a hierarchical genetic representation scheme was used to discover architectures for image classification, achieving a top-1 error of 3.6% on CIFAR-10 and 20.3% on ImageNet.
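The evolutionary search driving this kind of architecture discovery can be sketched in a few lines. The sketch below is a minimal, hypothetical toy: `OPS`, the flat motif encoding, the queue-based tournament loop, and the stand-in fitness are all illustrative assumptions, not the paper's actual setup (which mutates hierarchical motifs and scores candidates by training them).

```python
import random

OPS = ["conv3x3", "conv1x1", "maxpool", "identity"]

def mutate(motif, rng):
    """Change one randomly chosen slot's operation in a flat motif
    (a list of op labels); a hierarchical variant would recurse into
    sub-motifs the same way."""
    child = list(motif)
    child[rng.randrange(len(child))] = rng.choice(OPS)
    return child

def evolve(fitness, motif_len=4, population=8, steps=20, seed=0):
    """Toy tournament evolution: sample two candidates, mutate the
    fitter one, insert the child, and retire the oldest member."""
    rng = random.Random(seed)
    pop = [[rng.choice(OPS) for _ in range(motif_len)]
           for _ in range(population)]
    for _ in range(steps):
        a, b = rng.sample(pop, 2)
        winner = a if fitness(a) >= fitness(b) else b
        pop.append(mutate(winner, rng))
        pop.pop(0)  # queue-based aging: drop the oldest candidate
    return max(pop, key=fitness)

# Stand-in fitness: prefer conv3x3 everywhere. A real search would train
# the decoded architecture and use validation accuracy instead.
best = evolve(lambda m: m.count("conv3x3"))
```

In the real algorithm the expensive step is the fitness call (training each candidate), which is why the representation is designed so that small mutations reuse learned motifs.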
Proceedings Article
Meta-Learning with Latent Embedding Optimization
Andrei Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, Raia Hadsell
TL;DR: In this paper, a data-dependent latent generative representation of model parameters is learned and a gradient-based meta-learning is performed in a low-dimensional latent space for few-shot learning.
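The core move — running the gradient-based inner loop on a low-dimensional latent code rather than on the full parameter vector — can be sketched as follows. This is a minimal sketch under stated assumptions: the `decode` and `loss` functions, the finite-difference gradient, and the 1-D latent are all illustrative stand-ins, not the paper's learned encoder/decoder.

```python
def leo_inner_loop(decode, loss, z, lr=0.05, steps=10, eps=1e-5):
    """LEO-style inner loop sketch: adapt the low-dimensional latent
    code z by gradient descent on the task loss of the *decoded*
    parameters, instead of adapting the (much larger) parameter vector
    directly. The gradient is taken by central finite differences to
    keep the sketch dependency-free."""
    for _ in range(steps):
        grad = (loss(decode(z + eps)) - loss(decode(z - eps))) / (2 * eps)
        z -= lr * grad
    return z

# Toy task: decode maps z to a single weight w = 3*z; fit y = 6x,
# so the optimal latent is z = 2 (giving w = 6).
def decode(z):
    return 3 * z

def task_loss(w):
    data = [(1.0, 6.0), (2.0, 12.0)]
    return sum(0.5 * (w * x - y) ** 2 for x, y in data) / len(data)

z_adapted = leo_inner_loop(decode, task_loss, 0.0)
```

Because only the latent code is adapted, the outer (meta) loop can shape the decoder so that a few inner steps suffice for a new few-shot task.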
Posted Content
Universal Transformers
TL;DR: The authors proposed the Universal Transformer model, which employs a self-attention mechanism in every recurrent step to combine information from different parts of a sequence, and further employs an adaptive computation time (ACT) mechanism to dynamically adjust the number of times the representation of each position in a sequence is revised.
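The ACT halting rule for a single position can be sketched without any model machinery. In the sketch below, `halting_probs` stands in for the per-step halting probabilities a learned unit would emit; the threshold and remainder bookkeeping follow the usual ACT recipe, but this is an illustrative sketch, not the paper's implementation.

```python
def act_steps(halting_probs, threshold=0.01):
    """ACT halting rule for one sequence position: keep refining until
    the cumulative halting probability reaches 1 - threshold, then
    assign the leftover probability mass as the final step's weight
    (the "remainder"). Returns the number of refinement steps taken
    and the per-step mixing weights, which sum to 1."""
    cum = 0.0
    weights = []
    for p in halting_probs:
        if cum + p >= 1.0 - threshold:
            weights.append(1.0 - cum)  # remainder for the final step
            break
        weights.append(p)
        cum += p
    return len(weights), weights
```

The final representation of the position is then the weighted mix of the per-step states under these weights, so "easy" positions (large early halting probabilities) get fewer revisions than "hard" ones.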
Posted Content
Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks
TL;DR: The authors proposed a curriculum learning strategy to gently change the training process from a fully guided scheme using the true previous token, towards a less guided scheme which mostly uses the generated token instead.
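The sampling decision at the heart of this curriculum can be sketched per decoding step. The function and token names below (`scheduled_sampling_inputs`, `"<bos>"`, the linear decay schedule) are illustrative assumptions; the paper explores several decay schedules, of which linear decay is one common choice.

```python
import random

def scheduled_sampling_inputs(targets, predictions, epsilon):
    """Build decoder inputs for one training sequence: at each step,
    the *next* step's input is the true previous token with probability
    epsilon, else the model's own previous prediction. epsilon = 1 is
    fully guided training (teacher forcing); epsilon = 0 feeds back
    only generated tokens."""
    inputs = []
    prev = "<bos>"  # start-of-sequence token
    for t in range(len(targets)):
        inputs.append(prev)
        if random.random() < epsilon:
            prev = targets[t]        # ground-truth previous token
        else:
            prev = predictions[t]    # model-generated previous token
    return inputs

def linear_decay(step, total_steps, eps_min=0.0):
    """One possible schedule: decay epsilon linearly over training so
    the process gently shifts from guided to free-running decoding."""
    return max(eps_min, 1.0 - step / total_steps)
```

Decaying epsilon over training is what exposes the model, by the end, to the same error-compounding conditions it will face when generating sequences at test time.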
Proceedings Article
Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML
TL;DR: The ANIL (Almost No Inner Loop) algorithm is proposed: a simplification of MAML in which the inner loop is removed for all but the (task-specific) head of a MAML-trained network. The authors find that performance on the test tasks is entirely determined by the quality of the learned features, so that even the head of the network can be removed (the NIL algorithm).
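The ANIL inner loop — freeze the feature extractor, adapt only the head — can be sketched on a toy task. Everything below is an illustrative stand-in: a 1-D "feature extractor" in place of a meta-trained network, a single scalar head, and plain gradient descent on squared error.

```python
def adapt_head(features, head, task_data, lr=0.1, steps=5):
    """ANIL-style inner loop: the feature extractor `features` is
    frozen; only the linear head `head` (here a single scalar weight)
    is updated on the new task. Toy model: prediction = head * features(x),
    trained by gradient descent on mean squared error."""
    w = head
    for _ in range(steps):
        grad = 0.0
        for x, y in task_data:
            pred = w * features(x)
            grad += (pred - y) * features(x)  # d/dw of 0.5*(pred - y)^2
        w -= lr * grad / len(task_data)
    return w

# Pretend the meta-learned feature is phi(x) = 2x; the new task is
# y = 6x, so the optimal head is 3.
phi = lambda x: 2 * x
task = [(1.0, 6.0), (2.0, 12.0)]
adapted = adapt_head(phi, 0.0, task)
```

The finding quoted above says that in practice this head-only adaptation matches full MAML: the frozen features already carry the task-general structure, so the inner loop has almost nothing left to learn.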