Oriol Vinyals
Researcher at Google
Publications - 218
Citations - 121048
Oriol Vinyals is an academic researcher at Google. He has contributed to research on artificial neural networks and reinforcement learning, has an h-index of 84, and has co-authored 200 publications receiving 82,365 citations. His previous affiliations include the University of California, San Diego and the University of California, Berkeley.
Papers
Proceedings ArticleDOI
Learning speaker, addressee and overlap detection models from multimodal streams
TL;DR: This paper explores discriminative learning techniques for making accurate inferences on the problems of speaker, addressee, and overlap detection in multiparty human-computer dialog, using a novel extension of traditional decision trees that allows them to incorporate and model temporal signals.
Posted Content
AlignNet: Unsupervised Entity Alignment
Antonia Creswell, Kyriacos Nikiforou, Oriol Vinyals, Andre Saraiva, Rishabh Kabra, Loic Matthey, Christopher P. Burgess, Malcolm Reynolds, Richard Tanburn, Marta Garnelo, Murray Shanahan +10 more
TL;DR: This paper takes steps towards solving the alignment problem, presenting AlignNet, an unsupervised alignment module that can segment scenes into component objects without supervision.
Posted Content
Preventing Posterior Collapse with delta-VAEs
TL;DR: The authors constrain the variational family for the posterior to keep a minimum distance from the prior, ensuring that the latent variables preserve and encode useful information.
Proceedings ArticleDOI
A Hardware-Independent Fast Logarithm Approximation with Adjustable Accuracy
Oriol Vinyals, Gerald Friedland +1 more
TL;DR: This article presents a novel platform-independent, fast C-language implementation of the logarithm function that exploits the large amount of cache available in current processors.
Posted Content
Multimodal Few-Shot Learning with Frozen Language Models
TL;DR: The authors used aligned image and caption data to train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model prompted with this prefix generates the appropriate caption.