scispace - formally typeset

Pierre Sermanet

Researcher at Google

Publications: 66
Citations: 53,384

Pierre Sermanet is an academic researcher at Google. His research focuses on feature learning and computer science. He has an h-index of 29 and has co-authored 56 publications receiving 40,360 citations. His previous affiliations include New York University and the Courant Institute of Mathematical Sciences.

Papers
Proceedings Article

Wasserstein Dependency Measure for Representation Learning

TL;DR: Empirically demonstrates that mutual-information-based representation learning approaches fail to learn complete representations on a number of designed and real-world tasks, and shows that a practical approximation to a theoretically motivated solution, constructed using Lipschitz-constraint techniques from the GAN literature, achieves substantially improved results on tasks where incomplete representations are a major challenge.
Proceedings ArticleDOI

Learning Actionable Representations from Visual Observations

TL;DR: Shows that representations learned by agents observing themselves taking random actions, or observing other agents performing tasks successfully, enable the learning of continuous control policies with algorithms such as Proximal Policy Optimization, using only the learned embeddings as input.
Posted Content

Temporal Cycle-Consistency Learning

TL;DR: Introduces a self-supervised representation learning method based on the task of temporal alignment between videos; once trained, videos can be aligned simply by matching frames via nearest neighbors in the learned embedding space.
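The alignment step the TL;DR describes, matching frames via nearest neighbors in an embedding space, can be sketched in a few lines. This is an illustrative toy, not code from the paper: the function name and the 1-D "embeddings" are invented for the example, and a real setup would use embeddings produced by the trained network.

```python
import numpy as np

def align_by_nearest_neighbor(emb_a, emb_b):
    """For each frame embedding of video A, return the index of the
    nearest frame embedding of video B (squared Euclidean distance).
    emb_a: (Ta, D) array; emb_b: (Tb, D) array."""
    # Pairwise squared distances between every frame of A and of B.
    dists = ((emb_a[:, None, :] - emb_b[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)  # shape (Ta,): alignment indices into B

# Toy example: 1-D embeddings standing in for the "phase" of an action.
a = np.array([[0.0], [0.5], [1.0]])
b = np.array([[0.1], [0.4], [0.6], [0.95]])
print(align_by_nearest_neighbor(a, b))  # -> [0 1 3]
```

Each frame of the shorter video is mapped to its closest counterpart in the other, which is what lets two videos of the same action, performed at different speeds, be brought into correspondence.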
Posted Content

Grounding Language in Play

TL;DR: Presents a simple and scalable way to condition policies on human language rather than on language pairing, and introduces a simple technique that transfers knowledge from large unlabeled text corpora to robotic learning, significantly improving downstream robotic manipulation.
Proceedings Article

Attention for fine-grained categorization

TL;DR: The authors use an RNN of the same structure but substitute a more powerful visual network and perform large-scale pre-training of that network outside of the attention RNN; the resulting model discriminates fine-grained dog breeds moderately well even when given only an initial low-resolution context image and narrow, inexpensive glimpses at faces and fur patterns.