Open Access · Posted Content
Squeeze-and-Excitation Networks
TL;DR
Squeeze-and-excitation (SE) blocks adaptively recalibrate channel-wise feature responses by explicitly modelling interdependencies between channels, and can be stacked together to form SENet architectures.
Abstract:
The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL.
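As a concrete illustration of the squeeze-and-excitation mechanism described in the abstract, here is a minimal PyTorch sketch of an SE block; the reduction ratio of 16 is the paper's default, while the class and variable names are illustrative rather than taken from the released code.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average over spatial dims
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)      # (B, C) channel descriptor
        w = self.fc(s).view(b, c, 1, 1)  # (B, C, 1, 1) channel weights
        return x * w                     # recalibrate: rescale each channel

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])

The squeeze step collapses each feature map to a single descriptor, and the excitation step maps the descriptors through a bottleneck to gates that rescale the original channels, which is why the block adds only slight computational cost.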
Citations
Journal Article · DOI
Hyperspectral and Multispectral Classification for Coastal Wetland Using Depthwise Feature Interaction Network
TL;DR: In this article, a depthwise cross-attention module is designed to extract self-correlation and cross-correlation from multisource feature pairs for wetland classification, and the proposed DFINet is optimized by coordinating consistency loss, discrimination loss, and classification loss.
Posted Content
Unsupervised Single Image Deraining with Self-supervised Constraints
TL;DR: An Unsupervised Deraining Generative Adversarial Network (UD-GAN) is proposed to tackle the single-image deraining problem by introducing self-supervised constraints derived from the intrinsic statistics of unpaired rainy and clean images.
Journal Article · DOI
Wireless Image Transmission Using Deep Source Channel Coding With Attention Modules
TL;DR: In this paper, the authors proposed an attention-based deep learning joint source-channel coding (ADJSCC) scheme that can successfully operate at different SNR levels during transmission, inspired by the resource-assignment strategy in traditional JSCC.
Journal Article · DOI
SA-FPN: An effective feature pyramid network for crowded human detection
Xin Zhou, Long Zhang, et al.
TL;DR: A feature pyramid structure with a refined hierarchical-split block, referred to as Scale-FPN, better handles the challenging problem of scale variation across object instances and improves the state-of-the-art result of CrowdDet from 41.4% to 39.9% MR-2 (lower is better).
Posted Content
GTA: Global Temporal Attention for Video Action Understanding
TL;DR: This paper introduces Global Temporal Attention (GTA), which performs global temporal attention on top of spatial attention in a decoupled manner and randomly initializes a global attention matrix intended to learn stable temporal structures that generalize across different samples.
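To make the randomly initialized global attention matrix concrete, the following is a hedged Python sketch of a data-independent temporal attention layer; the class name, tensor shapes, and placement relative to spatial attention are assumptions for illustration, not the paper's implementation.

import torch
import torch.nn as nn

class GlobalTemporalAttention(nn.Module):
    def __init__(self, num_frames: int):
        super().__init__()
        # data-independent attention over time, learned as a free parameter
        self.attn = nn.Parameter(torch.randn(num_frames, num_frames))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, features)
        w = self.attn.softmax(dim=-1)  # each row attends over source frames
        return w @ x                   # temporal mixing shared across all samples

x = torch.randn(2, 8, 16)
print(GlobalTemporalAttention(8)(x).shape)  # torch.Size([2, 8, 16])

Because the matrix is a parameter rather than a function of the input, the learned temporal structure is shared across samples, matching the generalization motivation in the summary.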
References
Proceedings Article · DOI
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won first place in the ILSVRC 2015 classification task.
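The residual idea summarized here amounts to adding an identity skip connection around a small stack of layers, so the stack learns a residual F(x) rather than a full mapping. A minimal PyTorch sketch of the identity-shortcut case follows (the paper also uses projection shortcuts when dimensions change):

import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        return torch.relu(out + x)  # identity shortcut: y = relu(F(x) + x)

x = torch.randn(1, 16, 8, 8)
print(BasicResidualBlock(16)(x).shape)  # torch.Size([1, 16, 8, 8])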
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on the ImageNet classification benchmark.
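The architecture described in this summary can be written down almost verbatim. Below is a compact PyTorch sketch in that style; the kernel sizes and channel counts follow the original AlexNet paper (single-stream variant), and the softmax is left to the loss function.

import torch
import torch.nn as nn

alexnet_style = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),    # conv1 + pool
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),  # conv2 + pool
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),                     # conv3
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),                     # conv4
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2), # conv5 + pool
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),  # fc6
    nn.Linear(4096, 4096), nn.ReLU(),         # fc7
    nn.Linear(4096, 1000),                    # fc8: 1000-way logits
)

x = torch.randn(1, 3, 227, 227)
print(alexnet_style(x).shape)  # torch.Size([1, 1000])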
Journal Article · DOI
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1,000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
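The "constant error carousel" mentioned above is the additive cell-state update at the heart of the LSTM: because the cell state is updated by addition rather than repeated multiplication, error signals can flow across long time lags. A minimal Python sketch of one cell step follows; it uses the modern formulation, in which the forget gate is a later addition to the 1997 design, and the weight shapes are illustrative.

import torch

def lstm_cell_step(x, h, c, W, U, b):
    # W: (4H, X), U: (4H, H), b: (4H,), stacked for the four gates
    gates = W @ x + U @ h + b
    i, f, g, o = gates.chunk(4)
    i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()  # input, forget, output gates
    g = g.tanh()                                     # candidate cell update
    c = f * c + i * g                                # additive carousel update
    h = o * c.tanh()                                 # exposed hidden state
    return h, c

H, X = 4, 3
h = c = torch.zeros(H)
x = torch.randn(X)
W, U, b = torch.randn(4 * H, X), torch.randn(4 * H, H), torch.zeros(4 * H)
h, c = lstm_cell_step(x, h, c, W, U, b)
print(h.shape, c.shape)  # torch.Size([4]) torch.Size([4])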
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposed the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
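At the core of the architecture summarized above is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal Python sketch (single head, no masking) follows:

import math
import torch

def scaled_dot_product_attention(q, k, v):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # query-key similarity
    weights = scores.softmax(dim=-1)                   # attention distribution over keys
    return weights @ v                                 # weighted sum of values

q = k = v = torch.randn(2, 5, 8)  # (batch, sequence, d_k)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 8])

The sqrt(d_k) scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into regions with vanishing gradients.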
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of convolutional network depth on accuracy in the large-scale image recognition setting and showed that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.