Open Access
Posted Content

XCiT: Cross-Covariance Image Transformers.

TLDR
Cross-covariance image transformer (XCiT) as presented in this paper is built on a cross-covariance attention (XCA) operation that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries.
Abstract
Following their success in natural language processing, transformers have recently shown much promise for computer vision. The self-attention operation underlying transformers yields global interactions between all tokens, i.e., words or image patches, and enables flexible modelling of image data beyond the local interactions of convolutions. This flexibility, however, comes with a quadratic complexity in time and memory, hindering application to long sequences and high-resolution images. We propose a "transposed" version of self-attention that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries. The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens, and allows efficient processing of high-resolution images. Our cross-covariance image transformer (XCiT) is built upon XCA. It combines the accuracy of conventional transformers with the scalability of convolutional architectures. We validate the effectiveness and generality of XCiT by reporting excellent results on multiple vision benchmarks, including image classification and self-supervised feature learning on ImageNet-1k, object detection and instance segmentation on COCO, and semantic segmentation on ADE20k.
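To make the "transposed" attention concrete, here is a minimal PyTorch sketch of an XCA-style step as the abstract describes it: queries and keys interact channel-against-channel, so the softmax runs over a d x d cross-covariance matrix rather than an N x N token-token matrix. The tensor layout, the per-channel L2-normalization over tokens, and the temperature parameter follow common implementation practice and are assumptions here, not the authors' reference code.

```python
import torch
import torch.nn.functional as F

def cross_covariance_attention(q, k, v, temperature=1.0):
    """Sketch of cross-covariance attention (XCA): attention over the
    d x d cross-covariance of keys and queries, linear in token count N.

    q, k, v: (batch, N, d) projections of the token embeddings.
    """
    # Work channel-first so channels attend over tokens: (batch, d, N).
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    # L2-normalize each channel across tokens (assumed, per common practice).
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # d x d channel-to-channel attention map; softmax cost is O(d^2), not O(N^2).
    attn = torch.softmax((q @ k.transpose(-2, -1)) * temperature, dim=-1)
    out = attn @ v                 # (batch, d, N)
    return out.transpose(1, 2)     # back to (batch, N, d)
```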


Citations
Posted Content

Transformers in Vision: A Survey

TL;DR: Transformer networks as mentioned in this paper enable modeling long dependencies between input sequence elements and support parallel processing of sequences, in contrast to recurrent networks such as long short-term memory (LSTM).
Posted Content

Contextual Transformer Networks for Visual Recognition

TL;DR: In this article, a Contextual Transformer (CoT) block is proposed to exploit the contextual information among input keys to guide the learning of a dynamic attention matrix and thus strengthen the capacity of the visual representation.
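A loose, simplified sketch of that idea, under stated assumptions: a k x k convolution mines static context among the keys, and that context, concatenated with the input acting as queries, predicts a per-channel gate over the values. The gate stands in for CoT's locally predicted attention weights; layer shapes and the sigmoid gating are illustrative simplifications, not the CoT block's exact design.

```python
import torch
import torch.nn as nn

class CoTSketch(nn.Module):
    """Simplified Contextual-Transformer-style block (illustrative only)."""
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        # Static context: mine neighborhood information among the keys.
        self.key_ctx = nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2)
        self.value = nn.Conv2d(dim, dim, 1)
        # Predict attention from [static context, queries].
        self.attn = nn.Sequential(
            nn.Conv2d(2 * dim, dim, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 1))

    def forward(self, x):
        k_static = self.key_ctx(x)                       # contextualized keys
        v = self.value(x)
        a = self.attn(torch.cat([k_static, x], dim=1))   # x serves as queries
        k_dynamic = v * torch.sigmoid(a)                 # gate stands in for local attention
        return k_static + k_dynamic                      # fuse static and dynamic context
```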
Posted Content

A Survey on Vision Transformer

TL;DR: Transformer as mentioned in this paper is a type of deep neural network based mainly on the self-attention mechanism; originally applied in natural language processing, it has received increasing attention from the computer vision community.
Posted Content

Scaled ReLU Matters for Training Vision Transformers

TL;DR: In this article, a scaled ReLU operation in the convolutional stem of a vision transformer was shown not only to improve training stability but also to increase the diversity of patch tokens, thus boosting peak performance.
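As an illustration of the recipe, here is an assumed convolutional stem in which each ReLU is preceded by a normalization layer supplying the learned scale; the depth, strides, and channel widths are placeholders, not the paper's exact configuration.

```python
import torch.nn as nn

def scaled_relu_stem(in_ch=3, embed_dim=384):
    """Illustrative conv-stem with scaled (normalized) ReLUs."""
    chs = [in_ch, embed_dim // 4, embed_dim // 2, embed_dim]
    layers = []
    for c_in, c_out in zip(chs[:-1], chs[1:]):
        layers += [
            nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(c_out),     # learnable scale/shift before the ReLU
            nn.ReLU(inplace=True),
        ]
    return nn.Sequential(*layers)      # emits patch tokens for the transformer
```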
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework that eases the training of networks substantially deeper than those used previously and won 1st place on the ILSVRC 2015 classification task.
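The core mechanism is easy to state in code: the stacked layers learn a residual F(x), and the block outputs F(x) + x through an identity shortcut. A textbook sketch follows; it is not the paper's exact block.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The identity shortcut lets gradients bypass the stacked layers,
        # easing optimization of very deep networks.
        return self.relu(self.body(x) + x)
```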
Proceedings Article

Attention is All you Need

TL;DR: This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely and achieved state-of-the-art performance on English-to-French translation.
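For contrast with the channel-wise XCA sketched above, the Transformer's scaled dot-product attention computes softmax(QK^T / sqrt(d)) V over token pairs, which is where the quadratic cost in sequence length comes from. A compact sketch:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """softmax(Q K^T / sqrt(d)) V over tokens; q, k, v: (batch, N, d)."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (batch, N, N)
    if mask is not None:
        # Boolean mask marks positions to exclude from attention.
        scores = scores.masked_fill(mask, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```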
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Book ChapterDOI

Microsoft COCO: Common Objects in Context

TL;DR: A new dataset that aims to advance the state of the art in object recognition by placing it in the broader context of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.
Proceedings ArticleDOI

Feature Pyramid Networks for Object Detection

TL;DR: This paper exploits the inherent multi-scale, pyramidal hierarchy of deep convolutional networks to construct feature pyramids with marginal extra cost and achieves state-of-the-art single-model results on the COCO detection benchmark without bells and whistles.
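A minimal sketch of the scheme: 1 x 1 lateral convolutions project each backbone stage to a common width, then a top-down pass upsamples and sums, yielding semantically strong features at every scale. The channel widths and nearest-neighbor upsampling here are assumptions, not the paper's exact settings.

```python
import torch.nn as nn
import torch.nn.functional as F

class FPNSketch(nn.Module):
    """Illustrative feature pyramid over backbone stages (fine -> coarse)."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), width=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, width, 1) for c in in_channels)
        self.smooth = nn.ModuleList(
            nn.Conv2d(width, width, 3, padding=1) for _ in in_channels)

    def forward(self, feats):
        # Project every stage to the same channel width.
        laterals = [conv(f) for conv, f in zip(self.lateral, feats)]
        # Top-down: upsample the coarser map and add it to the finer one.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        # 3x3 smoothing reduces aliasing introduced by upsampling.
        return [conv(x) for conv, x in zip(self.smooth, laterals)]
```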