Armand Joulin
Researcher at Facebook
Publications - 136
Citations - 36652
Armand Joulin is an academic researcher from Facebook. The author has contributed to research in the topics Computer science and Word (computer architecture). The author has an h-index of 55 and has co-authored 125 publications receiving 25,130 citations. Previous affiliations of Armand Joulin include Microsoft and École Normale Supérieure.
Papers
Posted Content
ResMLP: Feedforward networks for image classification with data-efficient training
Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou, +9 more
TL;DR: ResMLP is an architecture built entirely upon multi-layer perceptrons for image classification; it achieves surprisingly good accuracy/complexity trade-offs on ImageNet by using heavy data augmentation and, optionally, distillation.
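The core of the architecture described above is a residual block with a cross-patch linear layer followed by a per-patch MLP, each preceded by an affine transform. A minimal NumPy sketch of one such block (shapes, parameter names, and the ReLU nonlinearity are assumptions for illustration, not the paper's code):

```python
import numpy as np

def resmlp_block(x, W_patch, W1, W2, alpha1, beta1, alpha2, beta2):
    """One ResMLP-style residual block (hypothetical minimal sketch).

    x: (num_patches, dim) patch embeddings.
    alpha*/beta*: scalars of a simple affine transform, standing in for
    the paper's Affine layer that replaces LayerNorm.
    """
    # Cross-patch communication: a linear layer acting across the
    # patch dimension, with a residual connection.
    z = alpha1 * x + beta1
    x = x + W_patch @ z          # (num_patches, num_patches) @ (num_patches, dim)
    # Per-patch two-layer MLP (ReLU used here for brevity), residual again.
    z = alpha2 * x + beta2
    x = x + np.maximum(z @ W1, 0.0) @ W2
    return x
```

Stacking such blocks over linearly embedded image patches, followed by average pooling and a linear classifier, gives the full feed-forward classifier.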
Proceedings ArticleDOI
Masked Siamese Networks for Label-Efficient Learning
Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, P. Vincent, Armand Joulin, Michael G. Rabbat, Nicolas Ballas, +8 more
TL;DR: This work proposes Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations that improves the scalability of joint-embedding architectures while producing representations at a high semantic level that perform competitively on low-shot image classification.
Proceedings Article
Improving Neural Language Models with a Continuous Cache
TL;DR: A simplified version of memory-augmented networks that stores past hidden activations as memory and accesses them through a dot product with the current hidden activation; the mechanism is very efficient and scales to very large memory sizes.
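The dot-product cache described above can be sketched in a few lines: past hidden states are scored against the current one, the resulting weights are summed per word, and the cache distribution is interpolated with the base language model. A minimal NumPy illustration under assumed names and a scalar flatness parameter `theta` (not the paper's code):

```python
import numpy as np

def cache_distribution(h_t, cache_keys, cache_words, vocab_size, theta=1.0):
    """Probability over the vocabulary induced by the cache (sketch).

    cache_keys: (T, d) past hidden activations.
    cache_words: (T,) the word that followed each past activation.
    h_t: (d,) current hidden activation.
    """
    scores = np.exp(theta * cache_keys @ h_t)   # dot-product scores
    scores /= scores.sum()                      # softmax over cache slots
    p = np.zeros(vocab_size)
    np.add.at(p, cache_words, scores)           # sum weights per word
    return p

def mix_with_cache(p_lm, p_cache, lam=0.1):
    # Linear interpolation of the base LM and the cache distribution.
    return (1.0 - lam) * p_lm + lam * p_cache
```

Because the cache is queried with a single matrix-vector product and needs no training, it can be added to a pre-trained language model at test time.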
Proceedings ArticleDOI
Learning Visual N-Grams from Web Data
TL;DR: This paper develops visual n-gram models that can predict arbitrary phrases relevant to the content of an image, and demonstrates the merits of these models in phrase prediction, phrase-based image retrieval, relating images and captions, and zero-shot transfer.
Posted Content
Augmenting Self-attention with Persistent Memory
TL;DR: A new model consisting solely of attention layers is proposed; it augments the self-attention layers with persistent memory vectors that play a role similar to that of the feed-forward layers.
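Mechanically, the idea above amounts to concatenating a set of learned key/value vectors to the sequence's own keys and values before the attention softmax. A minimal single-head NumPy sketch (shapes and names are assumptions for illustration, not the paper's code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_persistent_memory(q, k, v, mem_k, mem_v):
    """Self-attention whose keys/values include persistent memory (sketch).

    q, k, v: (T, d) query/key/value projections of the sequence.
    mem_k, mem_v: (N, d) learned persistent vectors, shared across
    positions; they let attention absorb the role of the feed-forward
    sublayer.
    """
    K = np.concatenate([k, mem_k], axis=0)   # (T + N, d)
    V = np.concatenate([v, mem_v], axis=0)
    w = softmax(q @ K.T / np.sqrt(q.shape[-1]))   # attend over tokens + memory
    return w @ V
```

Since the memory slots compete with ordinary tokens inside one softmax, the feed-forward sublayer can be removed without a separate interpolation step.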