Open Access Posted Content

A Survey on Vision Transformer

TL;DR
Transformer, as discussed in this paper, is a type of deep neural network based mainly on the self-attention mechanism; first applied to natural language processing, it is now receiving more and more attention from the computer vision community.
Abstract
Transformer, first applied to the field of natural language processing, is a type of deep neural network based mainly on the self-attention mechanism. Thanks to its strong representation capabilities, researchers are looking at ways to apply the transformer to computer vision tasks. On a variety of visual benchmarks, transformer-based models perform similarly to or better than other types of networks such as convolutional and recurrent networks. Given its high performance and reduced need for vision-specific inductive bias, the transformer is receiving more and more attention from the computer vision community. In this paper, we review these vision transformer models by categorizing them by task and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also include efficient transformer methods for pushing the transformer into real device-based applications. Furthermore, we take a brief look at the self-attention mechanism in computer vision, as it is the base component of the transformer. Toward the end of this paper, we discuss the challenges and provide several further research directions for vision transformers.
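As a concrete reference point, the sketch below shows the scaled dot-product self-attention that the abstract calls the base component of the transformer. It is a minimal illustration in PyTorch; the tensor sizes, projection matrices, and function name are illustrative choices, not taken from any specific model in the survey.

```python
# Minimal sketch of scaled dot-product self-attention, the base component
# of the transformer architectures surveyed here. Shapes and names are
# illustrative, not drawn from any particular vision transformer.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, tokens, dim); w_*: (dim, dim) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v                      # project tokens to queries/keys/values
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5    # pairwise similarities, scaled
    weights = F.softmax(scores, dim=-1)                      # attention distribution over tokens
    return weights @ v                                       # weighted sum of value vectors

x = torch.randn(2, 16, 64)                 # e.g. 16 image-patch tokens of width 64
w = [torch.randn(64, 64) for _ in range(3)]
out = self_attention(x, *w)                # -> (2, 16, 64)
```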


Citations
Posted Content

Transformer in Transformer

TL;DR: Transformer iN Transformer (TNT), as discussed by the authors, is a new kind of neural architecture that encodes the input data into powerful features via the attention mechanism: the vision transformer first divides the input image into several local patches and then calculates both their representations and their relationships.
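For context, the following is a minimal sketch of the patch-splitting step that TL;DRs like the one above refer to: an image is divided into local patches that become the token sequence fed to a vision transformer. The patch size, embedding width, and class name are assumptions for illustration, not the TNT implementation.

```python
# Hypothetical sketch of patch embedding: the image is split into
# non-overlapping local patches, each projected to a token embedding.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=384):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A strided convolution splits the image into patches and linearly
        # projects each one in a single operation.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (batch, 3, 224, 224)
        x = self.proj(x)                     # (batch, embed_dim, 14, 14)
        return x.flatten(2).transpose(1, 2)  # (batch, 196, embed_dim) token sequence

tokens = PatchEmbed()(torch.randn(1, 3, 224, 224))  # -> (1, 196, 384)
```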
Posted Content

No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency.

TL;DR: In this paper, a hybrid approach that benefits from Convolutional Neural Networks (CNNs) and the self-attention mechanism in Transformers is proposed to extract both local and non-local features from the input image.
Journal ArticleDOI

Toward Fine-Grained Sketch-Based 3D Shape Retrieval

TL;DR: In this paper, a cross-modal view attention module is proposed to compute the optimal combination of 2D projections of a 3D shape given a query sketch, enabling fine-grained retrieval of, for example, a specific chair from a gallery of chairs.
Posted Content

A Survey of Transformers.

TL;DR: A comprehensive review of various X-formers can be found in this article, where the vanilla Transformer is briefly introduced and a new taxonomy of X-formers is then proposed.
Posted Content

Focal Self-attention for Local-Global Interactions in Vision Transformers

TL;DR: The authors propose focal self-attention and, built on it, a new variant of Vision Transformer models called Focal Transformer, which captures both short- and long-range visual dependencies efficiently and effectively.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, winning 1st place on the ILSVRC 2015 classification task.
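The snippet below is a minimal sketch of a residual block in the spirit of that framework: a small stack of layers learns a residual that is added back to an identity shortcut. The channel count and layer choices are illustrative, not the exact ResNet configuration.

```python
# Minimal residual block: the convolutional stack learns a residual F(x)
# that is added to the identity shortcut x, easing optimization of deep nets.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)                    # identity shortcut

y = BasicBlock()(torch.randn(1, 64, 56, 56))      # shape preserved: (1, 64, 56, 56)
```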
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors trained a large, deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art results on the ImageNet classification benchmark.
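The layout described in that TL;DR can be sketched roughly as follows; the filter counts and input resolution are approximate and shown only to make the five-convolution/three-fully-connected structure concrete.

```python
# Rough AlexNet-style layout: five conv layers (some followed by max-pooling)
# and three fully-connected layers ending in 1000-way class logits.
import torch
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),    # conv1 + pool
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),  # conv2 + pool
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),                     # conv3
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),                     # conv4
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2), # conv5 + pool
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),                          # fc6
    nn.Linear(4096, 4096), nn.ReLU(),                                 # fc7
    nn.Linear(4096, 1000),                                            # fc8: 1000-way logits
)
logits = alexnet_like(torch.randn(1, 3, 227, 227))  # -> (1, 1000)
```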
Journal ArticleDOI

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
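A small usage sketch of a modern LSTM layer is shown below; the gated cell state is what lets the model carry information across long time lags as the TL;DR describes. All sizes are arbitrary.

```python
# Usage sketch of an LSTM layer: the cell state (c_n) carries information
# across long sequences, the role played by the "constant error carousel".
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
x = torch.randn(8, 1000, 32)               # 8 sequences of 1000 time steps
output, (h_n, c_n) = lstm(x)               # per-step outputs, final hidden and cell states
print(output.shape, h_n.shape, c_n.shape)  # (8, 1000, 64), (1, 8, 64), (1, 8, 64)
```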
Proceedings Article

Attention is All you Need

TL;DR: This paper proposed the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art results on English-to-French machine translation.
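As a usage sketch, an attention-only encoder of the kind that TL;DR describes can be assembled from standard PyTorch modules; the hyperparameters below are illustrative, not those of the original translation models.

```python
# Attention-only encoder stack: no recurrence, no convolutions.
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, dim_feedforward=2048, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6)

tokens = torch.randn(4, 50, 512)           # 4 sequences of 50 embedded tokens
encoded = encoder(tokens)                  # -> (4, 50, 512)
```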
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, multilayer neural networks trained with back-propagation are shown to synthesize a complex decision surface that can classify high-dimensional patterns such as handwritten characters, and a graph transformer network (GTN) is proposed for globally training multi-module document recognition systems.
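A LeNet-style convolutional classifier in the spirit of the document-recognition networks discussed in that paper can be sketched as follows; the layer sizes are illustrative rather than an exact reproduction of LeNet-5.

```python
# LeNet-style convolutional classifier for handwritten digits:
# two conv/pool stages followed by three fully-connected layers.
import torch
import torch.nn as nn

lenet_like = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),    # 32x32 -> 28x28 -> 14x14
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),   # 14x14 -> 10x10 -> 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),                                 # 10 digit classes
)
scores = lenet_like(torch.randn(1, 1, 32, 32))         # -> (1, 10)
```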