Open Access Proceedings Article
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby
TLDR
The Vision Transformer (ViT) applies a pure transformer directly to sequences of image patches and performs very well on image classification tasks, achieving state-of-the-art results on ImageNet, CIFAR-100, VTAB, and other benchmarks.
Abstract:
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
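To make the patch-to-token pipeline the abstract describes concrete, here is a minimal PyTorch sketch. The `TinyViT` name, layer sizes, and the use of stock `torch.nn` building blocks are our illustrative choices, not the paper's implementation:

```python
# A minimal sketch of the ViT idea: split an image into fixed-size patches,
# linearly embed them, prepend a class token, add position embeddings, and
# feed the sequence to a standard Transformer encoder.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=192,
                 depth=4, heads=3, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding as a strided convolution: one 16x16 patch -> one token.
        self.to_tokens = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, images):                      # images: (B, 3, H, W)
        tokens = self.to_tokens(images)             # (B, dim, H/16, W/16)
        tokens = tokens.flatten(2).transpose(1, 2)  # (B, N, dim)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        x = torch.cat([cls, tokens], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])                   # classify from the class token

logits = TinyViT()(torch.randn(2, 3, 224, 224))     # -> (2, 1000)
```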
Citations
Posted Content
Scaling Vision with Sparse Mixture of Experts.
Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, Neil Houlsby
TL;DR: V-MoE is a sparse version of the Vision Transformer that matches the performance of state-of-the-art networks while requiring as little as half of the compute at inference time.
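A rough sketch of the sparse top-k routing idea behind such a mixture of experts; the `SparseMoE` class and all hyperparameters are illustrative assumptions, not the V-MoE implementation. Only k of the experts run per token, which is where the inference-time savings come from:

```python
# Each token is routed to its top-k experts, so compute scales with k
# rather than with the total number of experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim=192, num_experts=8, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts))

    def forward(self, x):                           # x: (num_tokens, dim)
        gates = F.softmax(self.router(x), dim=-1)
        weights, idx = gates.topk(self.k, dim=-1)   # top-k experts per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                       # tokens routed to expert e
            rows, slot = mask.nonzero(as_tuple=True)
            if rows.numel():
                out[rows] += weights[rows, slot, None] * expert(x[rows])
        return out

y = SparseMoE()(torch.randn(10, 192))               # only 2 of 8 experts run per token
```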
Posted Content
Perceiver IO: A General Architecture for Structured Inputs & Outputs
Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier J. Hénaff, Matthew Botvinick, Andrew Zisserman, Oriol Vinyals, Joao Carreira
TL;DR: Perceiver IO learns to flexibly query the model's latent space to produce outputs of arbitrary size and semantics, and achieves state-of-the-art results on tasks with highly structured output spaces.
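The read/process/write pattern the summary alludes to can be sketched as a toy approximation; `TinyPerceiverIO` and all sizes are our assumptions, not the paper's architecture:

```python
# A small latent array cross-attends to the inputs, is processed with
# self-attention, and arbitrary output queries then cross-attend to the
# latents to decode outputs of any size.
import torch
import torch.nn as nn

class TinyPerceiverIO(nn.Module):
    def __init__(self, dim=128, num_latents=32, heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.read = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.process = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.write = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, inputs, queries):
        # inputs: (B, M, dim) of any length M; queries: (B, O, dim) of any length O.
        lat = self.latents.expand(inputs.size(0), -1, -1)
        lat, _ = self.read(lat, inputs, inputs)   # encode: latents attend to inputs
        lat = self.process(lat)                   # compute in the small latent space
        out, _ = self.write(queries, lat, lat)    # decode: queries attend to latents
        return out                                # (B, O, dim), O chosen by the caller

model = TinyPerceiverIO()
out = model(torch.randn(2, 500, 128), torch.randn(2, 7, 128))  # -> (2, 7, 128)
```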
Posted Content
Transformer with Peak Suppression and Knowledge Guidance for Fine-grained Image Recognition.
TL;DR: The authors propose a transformer architecture with a peak suppression module and a knowledge guidance module, which respectively encourage diverse discriminative features within a single image and aggregate discriminative cues across multiple images.
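One way to picture the peak-suppression idea is the following loose paraphrase; `suppress_peaks` and its parameters are hypothetical and are not the paper's module:

```python
# Attenuate the most salient patch features during training so the network
# cannot rely on a single discriminative peak and must diversify.
import torch

def suppress_peaks(tokens, ratio=0.1, scale=0.5):
    # tokens: (B, N, D) patch features; dampen the top `ratio` fraction of
    # tokens ranked by feature norm.
    saliency = tokens.norm(dim=-1)                  # (B, N)
    k = max(1, int(tokens.size(1) * ratio))
    top = saliency.topk(k, dim=-1).indices          # indices of the peak tokens
    mask = torch.ones_like(saliency)
    mask.scatter_(1, top, scale)                    # peaks scaled down
    return tokens * mask.unsqueeze(-1)

out = suppress_peaks(torch.randn(2, 196, 192))      # same shape, peaks damped
```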
Journal Article
Line as a Visual Sentence: Context-Aware Line Descriptor for Visual Localization
Sungho Yoon, Ayoung Kim
TL;DR: The authors view a line segment as a sentence containing points (words) and propose Line-Transformers, which handle variable numbers of lines and perform well across varying line lengths.
Book Chapter
Combining Transformer Generators with Convolutional Discriminators
Ricard Durall, Stanislav Frolov, Jörn Hees, Federico Raue, Franz-Josef Pfreundt, Andreas Dengel, Janis Keuper
TL;DR: Fully transformer-based GANs such as TransGAN require data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism; this work instead combines a transformer-based generator with a convolutional discriminator and achieves better results.
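A skeletal illustration of such a hybrid pairing; all layer sizes, the token-to-patch decoding, and the class names are illustrative assumptions, not the paper's models:

```python
# A Transformer-based generator produces image tokens that are reshaped
# into an image, and a plain convolutional discriminator scores it.
import torch
import torch.nn as nn

class TransformerGenerator(nn.Module):
    def __init__(self, dim=64, tokens=64, depth=2):    # 64 tokens -> an 8x8 grid
        super().__init__()
        self.seed = nn.Parameter(torch.randn(tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.body = nn.TransformerEncoder(layer, depth)
        self.to_rgb = nn.Linear(dim, 3 * 4 * 4)        # each token -> a 4x4 RGB patch

    def forward(self, z):                              # z: (B, dim)
        x = self.seed + z[:, None, :]                  # condition tokens on the latent
        x = self.body(x)                               # (B, 64, dim)
        x = self.to_rgb(x).view(-1, 8, 8, 3, 4, 4)
        return x.permute(0, 3, 1, 4, 2, 5).reshape(-1, 3, 32, 32)

disc = nn.Sequential(                                  # convolutional discriminator
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(128 * 8 * 8, 1))

fake = TransformerGenerator()(torch.randn(2, 64))      # (2, 3, 32, 32)
score = disc(fake)                                     # (2, 1)
```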
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting model won 1st place in the ILSVRC 2015 classification task.
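The residual trick itself fits in a few lines. A minimal sketch, with illustrative channel counts: the layers fit a residual F(x), and the block outputs F(x) + x via a shortcut connection:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels))

    def forward(self, x):
        return torch.relu(self.body(x) + x)          # shortcut: add the input back

y = ResidualBlock()(torch.randn(1, 64, 56, 56))      # same shape as the input
```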
Proceedings Article
Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
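The update rule is short enough to write out. A teaching sketch of one Adam step following the paper's algorithm (in practice one would use torch.optim.Adam; the toy objective is ours):

```python
import torch

def adam_step(p, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment (uncentered variance)
    m_hat = m / (1 - b1 ** t)              # bias correction for zero initialization
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (v_hat.sqrt() + eps)
    return p, m, v

p = torch.randn(3); m = torch.zeros(3); v = torch.zeros(3)
for t in range(1, 101):                    # minimize ||p||^2 as a toy objective
    p, m, v = adam_step(p, 2 * p, m, v, t)
```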
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: The authors train a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax, achieving state-of-the-art image classification performance.
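For reference, the described layout can be sketched as a simplified, single-branch approximation of the original (the original splits computation across two GPUs and adds local response normalization, omitted here):

```python
# Five convolutional layers, some followed by max pooling, then three
# fully connected layers; the 1000-way softmax is applied inside the loss.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000))

logits = alexnet_like(torch.randn(1, 3, 227, 227))   # -> (1, 1000)
```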
Proceedings Article
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Proceedings Article
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, and can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
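The masked-language-model objective behind this bidirectional pretraining can be sketched in a few lines; vocabulary size, model dimensions, and the random data are toy assumptions, not BERT's actual configuration:

```python
# Mask some tokens, encode the sequence bidirectionally (no causal mask, so
# every token attends to both sides), and predict the original ids at the
# masked positions.
import torch
import torch.nn as nn

vocab, dim, mask_id = 1000, 128, 0
embed = nn.Embedding(vocab, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
lm_head = nn.Linear(dim, vocab)

ids = torch.randint(1, vocab, (8, 32))               # a batch of token sequences
masked = ids.clone()
positions = torch.rand_like(ids, dtype=torch.float) < 0.15
masked[positions] = mask_id                          # replace ~15% with [MASK]

logits = lm_head(encoder(embed(masked)))             # (8, 32, vocab)
loss = nn.functional.cross_entropy(
    logits[positions], ids[positions])               # predict only masked tokens
```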