Open Access Proceedings Article

Deformable DETR: Deformable Transformers for End-to-End Object Detection

TLDR
Deformable DETR restricts attention to a small set of key sampling points around a reference, achieving better performance than DETR with 10× fewer training epochs.
Abstract
DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we propose Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10× fewer training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code shall be released.
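The abstract's core idea — attention modules that attend only to a small set of key sampling points around a reference — can be sketched for a single query as below. This is an illustrative simplification, not the authors' implementation: names are hypothetical, offsets and weights are passed in rather than predicted by linear projections, and nearest-neighbour sampling stands in for the paper's bilinear interpolation.

```python
import numpy as np

def deformable_attention(value, ref_point, offsets, attn_weights):
    """Minimal single-head sketch: one query attends to K sampled
    points near its reference point instead of the whole feature map.

    value        : (H, W, C) image feature map
    ref_point    : (2,) reference location (row, col) for the query
    offsets      : (K, 2) sampling offsets (learned in the real model)
    attn_weights : (K,) attention weights, softmax-normalized
    """
    H, W, C = value.shape
    out = np.zeros(C)
    for k in range(len(offsets)):
        # Nearest-neighbour sampling for simplicity; the paper uses
        # bilinear interpolation so the offsets stay differentiable.
        r = int(np.clip(round(ref_point[0] + offsets[k, 0]), 0, H - 1))
        c = int(np.clip(round(ref_point[1] + offsets[k, 1]), 0, W - 1))
        out += attn_weights[k] * value[r, c]
    return out

# Toy usage: an 8x8 map with 16 channels, K = 4 sampling points.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 8, 16))
w = np.exp(rng.standard_normal(4)); w /= w.sum()
y = deformable_attention(feat, np.array([3.0, 4.0]),
                         rng.standard_normal((4, 2)), w)
print(y.shape)  # (16,)
```

Because each query touches only K points rather than all H×W locations, the cost no longer scales with the full spatial resolution — which is what lets Deformable DETR use multi-scale, high-resolution feature maps.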



Citations
Posted Content

HandsFormer: Keypoint Transformer for Monocular 3D Pose Estimation of Hands and Object in Interaction

TL;DR: In this paper, the authors estimate the 3D pose of two hands in close interaction from a single color image by extracting a set of potential 2D locations for the joints of both hands as extrema of a heatmap.
Posted Content

TubeR: Tube-Transformer for Action Detection.

TL;DR: In this paper, the authors propose a transformer-based network for end-to-end action detection, with an encoder and decoder optimized for modeling action tubes with variable lengths and aspect ratios.
Posted Content

StyTr^2: Unbiased Image Style Transfer with Transformers

TL;DR: In this paper, the authors propose a transformer-based approach for image style transfer, which consists of two different transformer encoders that generate domain-specific sequences for content and style, respectively.
Posted Content

Next Generation Multitarget Trackers: Random Finite Set Methods vs Transformer-based Deep Learning.

TL;DR: In this article, a model-free deep learning method for multitarget tracking based on the Transformer architecture was proposed; it can learn the optimal filter from data but, to the best of the authors' knowledge, had not previously been compared to current state-of-the-art Bayesian filters, especially in contexts where accurate models are available.
Posted Content

Conditional DETR for Fast Training Convergence

TL;DR: In this paper, a conditional cross-attention mechanism for fast DETR training is proposed. It is motivated by the observation that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
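The residual learning framework summarized above can be sketched in a few lines. This toy block is illustrative only (names and shapes are hypothetical, convolutions and batch normalization omitted): the block learns a residual F(x) and outputs F(x) + x, so an identity mapping is trivial to represent — which is what eases the training of very deep networks.

```python
import numpy as np

def residual_block(x, W1, W2):
    """Toy fully-connected residual block: ReLU(x W1) W2 + x.
    The skip connection adds the input back onto the learned residual."""
    h = np.maximum(0, x @ W1)   # residual branch: ReLU(x W1)
    return x + h @ W2           # skip connection: F(x) + x

# With zero weights the residual branch vanishes and the block
# is exactly the identity mapping.
x = np.arange(4.0)
W = np.zeros((4, 4))
print(residual_block(x, W, W))  # [0. 1. 2. 3.]
```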
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
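The adaptive moment estimates mentioned in the Adam summary can be made concrete with a single update step. This is an illustrative re-implementation of the published algorithm (variable names are my own, not the authors' code): m and v are exponential moving averages of the gradient and squared gradient, bias-corrected before the update.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for parameters theta at step t (1-indexed)."""
    m = b1 * m + (1 - b1) * grad          # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(x) = x^2 (gradient 2x) starting from x = 1.
x, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
print(abs(float(x)) < 0.05)  # converges toward the minimum at 0
```

Note that the effective step size is roughly lr regardless of the gradient's scale, since m_hat / sqrt(v_hat) is approximately unit-magnitude — one reason Adam needs little learning-rate tuning.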
Proceedings Article

Attention is All you Need

TL;DR: This paper proposed a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely and achieved state-of-the-art performance on English-to-French translation.
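The attention mechanism that the architecture above is built on is scaled dot-product attention, softmax(QKᵀ/√d)V — the same operation whose quadratic cost in the number of keys motivates Deformable DETR's sparse sampling. A minimal single-head NumPy sketch (batching and masking omitted):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d)) V for one attention head."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                           # weighted sum of values

# Toy usage: 3 queries attending over 5 key/value pairs, d = 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8)
```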
Proceedings Article (DOI)

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Book Chapter (DOI)

Microsoft COCO: Common Objects in Context

TL;DR: A new dataset with the goal of advancing the state-of-the-art in object recognition by placing it in the broader context of scene understanding, gathering images of complex everyday scenes containing common objects in their natural context.