Open Access · Posted Content
Squeeze-and-Excitation Networks
TL;DR: Squeeze-and-Excitation (SE) blocks adaptively recalibrate channel-wise feature responses by explicitly modelling interdependencies between channels, and can be stacked together to form SENet architectures.
Abstract:
The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL.
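To make the mechanism concrete, here is a minimal PyTorch sketch of an SE block as the abstract describes it: a squeeze step (global average pooling) followed by an excitation step (a two-layer gating MLP) whose output rescales the channels. The class name SEBlock and the reduction ratio of 16 are illustrative choices, not the authors' released code.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Squeeze: global average pooling collapses each channel's
        # spatial map into a single descriptor.
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Excitation: a two-layer bottleneck MLP models channel
        # interdependencies and emits per-channel gates in (0, 1).
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = self.pool(x).view(b, c)        # squeeze: (B, C)
        w = self.fc(s).view(b, c, 1, 1)    # excitation: (B, C, 1, 1)
        return x * w                       # recalibrate channel responses

A block like this can be appended to the output of any convolutional stage, e.g. the residual branch of a ResNet block, which is how SENet variants such as SE-ResNet are formed.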
Citations
Proceedings Article
Enhancing Music Features by Knowledge Transfer from User-item Log Data
TL;DR: A novel method that exploits music listening log data for general-purpose music feature extraction by extending intra-domain knowledge distillation to the cross-domain setting, i.e., transferring knowledge obtained from the user-item domain to the music content domain.
Journal Article
A Comprehensive Survey with Quantitative Comparison of Image Analysis Methods for Microorganism Biovolume Measurements
Jiawei Zhang, Chen Li, Mamunur Rahaman, Yudong Yao, Pingli Ma, Jinghua Zhang, Xin Zhao, Tao Jiang, Marcin Grzegorzek +8 more
TL;DR: More than 62 articles are reviewed in this paper and grouped by digital image analysis method over time; the survey has high research significance and application value, and can serve as a comprehensive reference for microbial researchers on microorganism biovolume measurement using digital image analysis and its potential applications.
Journal Article
Intelligent detection and applied research on diabetic retinopathy based on the residual attention network
TL;DR: The classification and diagnosis of diabetic retinopathy (DR) may be improved by adopting the proposed residual attention network (RAN), which improves on the results of commonly used CNN methods on the same dataset.
Journal Article
Lw-TISNet: Light-Weight Convolutional Neural Network Incorporating Attention Mechanism and Multiple Supervision Strategy for Tongue Image Segmentation
Journal Article
Efficient dual attention SlowFast networks for video action recognition
TL;DR: This paper proposes a cross-modality dual attention fusion module, named CMDA, to explicitly exchange spatial-temporal information between the two pathways of two-stream SlowFast networks.
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: This paper proposes a residual learning framework to ease the training of networks substantially deeper than those used previously; the resulting models won first place on the ILSVRC 2015 classification task.
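For reference, a minimal PyTorch sketch of the basic residual formulation y = F(x) + x described in that paper; the class name and fixed channel width are illustrative simplifications (the original also uses strided and projection-shortcut variants).

import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The layers learn a residual F(x); the identity shortcut
        # makes very deep networks easier to optimise.
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)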
Proceedings Article
ImageNet Classification with Deep Convolutional Neural Networks
TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
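A short usage sketch of the described five-conv/three-FC architecture via torchvision's AlexNet implementation, assuming torchvision ≥ 0.13 (for the weights=None argument); this is an illustration, not the authors' original code.

import torch
from torchvision.models import alexnet

model = alexnet(weights=None)              # untrained, 1000 output classes
logits = model(torch.randn(1, 3, 224, 224))
probs = logits.softmax(dim=-1)             # final 1000-way softmax
print(probs.shape)                         # torch.Size([1, 1000])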
Journal Article
Long short-term memory
TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
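A minimal PyTorch usage sketch of stepping an LSTM cell through a long sequence; the cell state c plays the role of the constant error carousel through which error flows additively across time steps. The sizes and sequence length here are illustrative.

import torch
import torch.nn as nn

cell = nn.LSTMCell(input_size=10, hidden_size=20)
h = torch.zeros(1, 20)   # hidden state
c = torch.zeros(1, 20)   # cell state (the constant error carousel)
for t in range(1000):    # step through a long sequence
    x_t = torch.randn(1, 10)
    h, c = cell(x_t, (h, c))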
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposes a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
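At the core of that architecture is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal PyTorch sketch follows; the function name and shapes are illustrative.

import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k); scaling by sqrt(d_k) keeps the
    # softmax logits in a well-behaved range.
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)
    return weights @ v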
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.