Open Access · Posted Content
A Survey on Visual Transformer
Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, Zhaohui Yang, Yiman Zhang, Dacheng Tao, +12 more
TL;DR: In this paper, a review of transformer-based models for computer vision tasks is presented, covering the backbone network, high/mid-level vision, low-level image processing, and video processing.
Abstract:
Transformer, first applied to the field of natural language processing, is a type of deep neural network based mainly on the self-attention mechanism. Thanks to its strong representation capability, researchers are exploring ways to apply the transformer to computer vision tasks. On a variety of visual benchmarks, transformer-based models perform similarly to, or better than, other types of networks such as convolutional and recurrent networks. Given its high performance and no need for human-defined inductive bias, the transformer is receiving increasing attention from the computer vision community. In this paper, we review these visual transformer models by categorizing them by task and analyzing their advantages and disadvantages. The main categories we explore include the backbone network, high/mid-level vision, low-level vision, and video processing. We also take a brief look at the self-attention mechanism in computer vision, as it is the base component of the transformer. Furthermore, we cover efficient transformer methods for pushing the transformer into real device-based applications. Toward the end of this paper, we discuss the challenges and suggest several further research directions for visual transformers.
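Since self-attention is the base component of the transformer, a minimal NumPy sketch of scaled dot-product self-attention may make the mechanism concrete. This is a toy illustration, not any surveyed model's implementation; the weight matrices `Wq`, `Wk`, `Wv` and the random inputs are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token similarities
    weights = softmax(scores, axis=-1)   # each row is an attention distribution
    return weights @ V                   # weighted sum of value vectors

rng = np.random.default_rng(0)
n, d = 4, 8   # e.g. 4 image patches, embedding dimension 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one output vector per input token
```

In a vision transformer, the rows of `X` would be linear embeddings of fixed-size image patches, and this block would be stacked with feed-forward layers and residual connections.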
Citations
Posted Content
Attention Mechanisms in Computer Vision: A Survey.
Meng-Hao Guo, Tian-Xing Xu, Jiangjiang Liu, Zheng-Ning Liu, Peng-Tao Jiang, Tai-Jiang Mu, Song-Hai Zhang, Ralph R. Martin, Ming-Ming Cheng, Shi-Min Hu, +9 more
TL;DR: This article presents a comprehensive review of attention mechanisms in computer vision, categorizing them by approach: channel attention, spatial attention, temporal attention, and branch attention.
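To illustrate one of these categories, here is a minimal squeeze-and-excitation-style channel-attention sketch in NumPy. It is a toy with hypothetical weight shapes (reduction ratio 4), not the surveyed paper's implementation.

```python
import numpy as np

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation-style channel attention over a (C, H, W) feature map."""
    squeeze = feat.mean(axis=(1, 2))           # global average pool -> (C,)
    hidden = np.maximum(0, W1 @ squeeze)       # bottleneck with ReLU
    scale = 1 / (1 + np.exp(-(W2 @ hidden)))   # sigmoid gate, one weight per channel
    return feat * scale[:, None, None]         # reweight each channel

rng = np.random.default_rng(1)
C, H, W = 16, 8, 8
feat = rng.normal(size=(C, H, W))
W1 = rng.normal(size=(C // 4, C))   # squeeze: C -> C/4
W2 = rng.normal(size=(C, C // 4))   # excite:  C/4 -> C
out = channel_attention(feat, W1, W2)
print(out.shape)  # (16, 8, 8): same shape, channels rescaled
```

Spatial attention works analogously but produces an (H, W) gate instead of a per-channel one.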
Journal Article
Deep learning for neuroimaging-based diagnosis and rehabilitation of Autism Spectrum Disorder: A review.
Marjane Khodatars, Afshin Shoeibi, Delaram Sadeghi, Navid Ghaasemi, Mahboobeh Jafari, Parisa Moridian, Ali Khadem, Roohallah Alizadehsani, Assef Zare, Yinan Kong, Abbas Khosravi, Saeid Nahavandi, Sadiq Hussain, U. Rajendra Acharya, Michael Berk, +15 more
TL;DR: In this article, the authors review the use of deep learning (DL) for the diagnosis of autism spectrum disorder (ASD) and subsequent rehabilitation, which can help physicians apply automatic diagnosis and rehabilitation procedures.
Posted Content
TransReID: Transformer-based Object Re-Identification
TL;DR: TransReID proposes a pure transformer-based object re-identification (ReID) framework, which first encodes an image as a sequence of patches and builds a strong transformer-based baseline with a few critical improvements, achieving competitive results on several ReID benchmarks.
Journal Article
Ms RED: A novel multi-scale residual encoding and decoding network for skin lesion segmentation
TL;DR: Li et al. propose a multi-scale residual encoding and decoding network (Ms RED) for skin lesion segmentation, which can accurately, reliably, and efficiently segment a variety of lesions.
Posted Content
No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency.
TL;DR: In this paper, a hybrid approach is proposed that combines Convolutional Neural Networks (CNNs) with the self-attention mechanism of Transformers to extract both local and non-local features from the input image.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; the resulting deep residual networks won 1st place on the ILSVRC 2015 classification task.
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
Journal Article
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Proceedings Article
Fully convolutional networks for semantic segmentation
TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.
Journal Article
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
TL;DR: This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals, and further merges RPN and Fast R-CNN into a single network by sharing their convolutional features.