
Oriol Vinyals

Researcher at Google

Publications: 218
Citations: 121,048

Oriol Vinyals is an academic researcher at Google. He has contributed to research on topics including artificial neural networks and reinforcement learning. The author has an h-index of 84 and has co-authored 200 publications receiving 82,365 citations. His previous affiliations include the University of California, San Diego and the University of California, Berkeley.

Papers
Posted Content

An Online Sequence-to-Sequence Model Using Partial Conditioning

TL;DR: In this article, an encoder recurrent neural network (RNN) computes features at the same frame rate as the input, and a transducer RNN operates over blocks of those features, emitting outputs conditioned only on the partial input observed so far.
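
As a rough illustration of the blocked encoder/transducer scheme the summary describes, here is a minimal numpy sketch, not the paper's actual model: an encoder RNN produces one feature per input frame, and a transducer RNN emits an output after each block of frames, conditioned only on the input seen so far. All sizes, parameter names, and the one-output-per-block simplification are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
D_IN, D_H, D_OUT, BLOCK = 8, 16, 5, 4  # hypothetical sizes

# Hypothetical parameters for two simple (Elman-style) RNNs.
W_enc_x = rng.normal(scale=0.1, size=(D_H, D_IN))
W_enc_h = rng.normal(scale=0.1, size=(D_H, D_H))
W_tr_x = rng.normal(scale=0.1, size=(D_H, D_H))
W_tr_h = rng.normal(scale=0.1, size=(D_H, D_H))
W_out = rng.normal(scale=0.1, size=(D_OUT, D_H))

def rnn_step(W_x, W_h, x, h):
    return np.tanh(W_x @ x + W_h @ h)

def online_transduce(frames):
    """Emit one output distribution per block, using only frames seen so far."""
    h_enc = np.zeros(D_H)
    h_tr = np.zeros(D_H)
    outputs, block = [], []
    for x in frames:                       # frames arrive one at a time
        h_enc = rnn_step(W_enc_x, W_enc_h, x, h_enc)
        block.append(h_enc)
        if len(block) == BLOCK:            # end of block: run the transducer
            for feat in block:
                h_tr = rnn_step(W_tr_x, W_tr_h, feat, h_tr)
            logits = W_out @ h_tr
            outputs.append(np.exp(logits) / np.exp(logits).sum())
            block = []
    return outputs

print(len(online_transduce(rng.normal(size=(12, D_IN)))))  # 3 blocks -> 3 outputs
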
Patent

Using Hierarchical Representations for Neural Network Architecture Searching

TL;DR: In this paper, a computer-implemented method for automatically determining a neural network architecture is presented, in which the architecture is modified by selecting a level of a hierarchy, selecting two nodes at that level, and modifying, removing, or adding an edge between those nodes according to operations associated with lower levels of the hierarchy.
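
The mutation step the summary describes can be sketched on a toy encoding. The data layout below, motifs as upper-triangular adjacency matrices whose integer entries name operations from the level below (0 meaning no edge), is an illustrative assumption, not the patent's actual representation.

import numpy as np

rng = np.random.default_rng(0)

# hierarchy[level][motif] is an (n x n) strictly upper-triangular int matrix;
# entry (i, j) selects an op: 0 = no edge, k >= 1 = k-th lower-level op/motif.
def random_hierarchy(num_levels=2, motifs_per_level=(3, 1), nodes=4, num_primitives=3):
    hierarchy = []
    num_ops_below = num_primitives
    for level in range(num_levels):
        motifs = []
        for _ in range(motifs_per_level[level]):
            m = np.triu(rng.integers(0, num_ops_below + 1, size=(nodes, nodes)), k=1)
            motifs.append(m)
        hierarchy.append(motifs)
        num_ops_below = motifs_per_level[level]
    return hierarchy

def mutate(hierarchy, num_primitives=3):
    """Pick a level, a motif, and two nodes; resample the edge between them."""
    level = rng.integers(0, len(hierarchy))
    motif = hierarchy[level][rng.integers(0, len(hierarchy[level]))]
    n = motif.shape[0]
    i = rng.integers(0, n - 1)
    j = rng.integers(i + 1, n)            # keep the graph acyclic (i < j)
    num_ops_below = num_primitives if level == 0 else len(hierarchy[level - 1])
    motif[i, j] = rng.integers(0, num_ops_below + 1)  # drawing 0 removes the edge
    return hierarchy

h = mutate(random_hierarchy())
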
Posted Content

WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset

TL;DR: The WikiGraphs dataset, presented in this paper, pairs Wikipedia articles with knowledge graphs to facilitate research in conditional text generation, graph generation, and graph representation learning.
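
To make the pairing concrete, here is a hypothetical sketch of what one text/graph sample could look like. The PairedSample class, its field names, and the example contents are invented for illustration and are not the actual WikiGraphs schema.

from dataclasses import dataclass

@dataclass
class PairedSample:
    title: str
    text: str                                # full Wikipedia article
    triples: list[tuple[str, str, str]]      # knowledge-graph edges

sample = PairedSample(
    title="Alan Turing",
    text="Alan Turing was a mathematician and computer scientist ...",
    triples=[
        ("Alan Turing", "profession", "Mathematician"),
        ("Alan Turing", "place_of_birth", "London"),
    ],
)

# Conditional text generation consumes sample.triples and predicts
# sample.text; graph generation goes the other way around.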

HiP: Hierarchical Perceiver

TL;DR: In this article, the Hierarchical Perceiver (HiP) is proposed, which learns dense low-dimensional positional embeddings for high-resolution images and videos and can handle up to a few hundred thousand inputs.
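
A minimal numpy sketch of the hierarchical grouping idea behind HiP: chunk a long flat input into groups, add learned positional embeddings, and compress each group with a few latents via cross-attention. The shapes, the single level, and the simplified single-head attention are assumptions, not the paper's full architecture.

import numpy as np

rng = np.random.default_rng(0)
N, D, GROUPS, LATENTS = 1024, 16, 8, 4       # N inputs split into GROUPS chunks

x = rng.normal(size=(N, D))                  # flattened inputs (e.g. pixels)
pos = rng.normal(scale=0.02, size=(N, D))    # stand-in for learned positional embeddings
latents = rng.normal(size=(LATENTS, D))      # shared latent queries

def cross_attend(q, kv):
    att = q @ kv.T / np.sqrt(kv.shape[1])            # (LATENTS, group_size)
    att = np.exp(att - att.max(axis=1, keepdims=True))
    att /= att.sum(axis=1, keepdims=True)
    return att @ kv                                  # (LATENTS, D)

groups = (x + pos).reshape(GROUPS, N // GROUPS, D)
# Each group is compressed to LATENTS vectors; attention cost stays local.
compressed = np.stack([cross_attend(latents, g) for g in groups])
merged = compressed.reshape(GROUPS * LATENTS, D)     # input to the next level
print(merged.shape)                                  # (32, 16)
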
Posted Content

Why Size Matters: Feature Coding as Nystrom Sampling

TL;DR: A novel view of feature-extraction pipelines that rely on a coding step followed by a linear classifier is proposed, based on kernel methods and Nystrom sampling; it may help explain the positive effect of codebook size and justify the need to stack more layers, as flat models empirically saturate as more complexity is added.
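
The Nystrom reading can be made concrete with a small numpy experiment: treat a codebook of m landmark points as the basis for a rank-m kernel approximation and watch the approximation error fall as m grows. The RBF kernel, the sizes, and the uniform landmark sampling are illustrative choices, not the paper's setup.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                       # data points

def rbf(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(X, X)                                        # exact kernel matrix
for m in (5, 20, 80):                                # "codebook" sizes
    C = X[rng.choice(len(X), size=m, replace=False)] # landmarks ~ codebook
    K_nm, K_mm = rbf(X, C), rbf(C, C)
    K_hat = K_nm @ np.linalg.pinv(K_mm) @ K_nm.T     # Nystrom: K ~ K_nm K_mm^-1 K_nm^T
    err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
    print(f"m={m:3d}  relative error={err:.3f}")     # error should shrink with m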