Open Access Proceedings Article

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

TLDR
The Vision Transformer (ViT) applies a pure Transformer directly to sequences of image patches and performs very well on image classification, attaining excellent results on ImageNet, CIFAR-100, VTAB, and other benchmarks compared to state-of-the-art convolutional networks.
Abstract
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.
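
The patch-based design described in the abstract can be illustrated with a short sketch. The following minimal PyTorch example is an approximation for illustration only, not the authors' official implementation; module names and hyperparameters are chosen here for clarity. It splits an image into 16x16 patches, linearly embeds them, prepends a learnable class token, adds position embeddings, and runs the sequence through a standard Transformer encoder.

import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Minimal Vision Transformer sketch: 16x16 patches -> Transformer encoder -> classifier."""

    def __init__(self, image_size=224, patch_size=16, dim=768, depth=12, heads=12, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding: a strided convolution is equivalent to flattening
        # non-overlapping 16x16 patches and applying a shared linear projection.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                      # x: (B, 3, H, W)
        x = self.patch_embed(x)                # (B, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)       # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)
        return self.head(x[:, 0])              # classify from the class token

logits = MiniViT()(torch.randn(2, 3, 224, 224))   # -> shape (2, 1000)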



Citations
Posted Content

An Empirical Study of Training Self-Supervised Vision Transformers

TL;DR: This work investigates the effects of several fundamental components for training self-supervised ViT, observes that training instability is a major issue that can be hidden by apparently good results, and shows that these partial failures can be remedied by making training more stable.
Posted Content

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions

TL;DR: This work proposes the Pyramid Vision Transformer (PVT), a simple convolution-free backbone network useful for many dense prediction tasks, and reports state-of-the-art performance on the COCO dataset.
Posted Content

Natural Adversarial Examples

TL;DR: This work introduces two challenging datasets that reliably cause machine learning model performance to degrade substantially, and curates an adversarial out-of-distribution detection dataset called ImageNet-O, the first out-of-distribution detection dataset created for ImageNet models.
Posted Content

An Attentive Survey of Attention Models

TL;DR: This survey proposes a taxonomy that groups existing attention-model techniques into coherent categories and describes how attention has been used to improve the interpretability of neural networks.
Posted Content

CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification

TL;DR: This work proposes a dual-branch transformer that combines image patches (i.e., transformer tokens) of different sizes to produce stronger image features, achieving promising results on image classification compared to convolutional neural networks.
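
The dual-branch idea above can be sketched roughly as follows. This is an illustrative simplification, not the authors' exact CrossViT architecture: both branches are assumed here to share one embedding dimension, and only the cross-attention fusion step is shown, where one branch's class token attends to the other branch's patch tokens.

import torch
import torch.nn as nn

class CrossTokenFusion(nn.Module):
    """Fuse two token sequences: branch A's class token attends to branch B's tokens."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens_a, tokens_b):
        cls_a = tokens_a[:, :1]                          # (B, 1, dim) class token of branch A
        fused, _ = self.attn(cls_a, tokens_b, tokens_b)  # query = cls_a, key/value = branch B tokens
        # Write the fused class token back; branch A's patch tokens are untouched.
        return torch.cat([cls_a + fused, tokens_a[:, 1:]], dim=1)

# Toy usage: branch A embeds 16x16 patches (197 tokens incl. class token),
# branch B embeds 32x32 patches (50 tokens incl. class token).
a = torch.randn(2, 197, 256)
b = torch.randn(2, 50, 256)
out = CrossTokenFusion()(a, b)    # -> (2, 197, 256)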
References
Proceedings Article

Fixing the train-test resolution discrepancy

TL;DR: This work proposes a simple strategy that employs different train and test resolutions to optimize classifier performance, achieving state-of-the-art performance on ImageNet.
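
As a rough illustration of the train/test resolution idea above, the following torchvision sketch uses aggressive random-resized crops at a lower resolution for training and a larger center crop at test time. The specific resolutions and transform choices here are assumptions, not the paper's exact recipe; in FixRes the final layers are additionally fine-tuned at the test resolution.

from torchvision import transforms

TRAIN_RES, TEST_RES = 224, 320   # example values; the paper tunes these per model

# Training-time augmentation: RandomResizedCrop tends to zoom in on objects,
# so objects appear larger at train time than at test time.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(TRAIN_RES),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Test-time preprocessing at a higher resolution compensates for that
# apparent-size mismatch; the classifier head is then fine-tuned briefly at TEST_RES.
test_tf = transforms.Compose([
    transforms.Resize(int(TEST_RES * 1.15)),
    transforms.CenterCrop(TEST_RES),
    transforms.ToTensor(),
])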
Proceedings Article

Scaling Autoregressive Video Models

TL;DR: Conceptually simple autoregressive video generation models based on a three-dimensional self-attention mechanism are shown to achieve competitive results across multiple metrics on popular benchmark datasets, producing continuations of high fidelity and realism.
Posted Content

Fixing the train-test resolution discrepancy: FixEfficientNet

TL;DR: The FixRes strategy is advantageously combined with recent training recipes from the literature; the resulting models significantly outperform the initial EfficientNet architecture with the same number of parameters and establish a new state of the art for ImageNet with a single crop.
Proceedings Article (DOI)

Self-Supervised Learning of Video-Induced Visual Invariances

TL;DR: This work proposes a self-supervised learning framework for transferable visual representations based on Video-Induced Visual Invariances (VIVI), which exploits the implicit hierarchy in videos by making use of frame-level invariances, shot/clip-level invariances, and video-level semantic relationships between scenes across shots and clips.