Open Access · Posted Content

Squeeze-and-Excitation Networks

TLDR
The Squeeze-and-Excitation (SE) block adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels; SE blocks can be stacked together to form SENet architectures.
Abstract
The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we focus instead on the channel relationship and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We show that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets. We further demonstrate that SE blocks bring significant improvements in performance for existing state-of-the-art CNNs at slight additional computational cost. Squeeze-and-Excitation Networks formed the foundation of our ILSVRC 2017 classification submission which won first place and reduced the top-5 error to 2.251%, surpassing the winning entry of 2016 by a relative improvement of ~25%. Models and code are available at this https URL.
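The recalibration described in the abstract is compact enough to sketch directly. Below is a minimal PyTorch rendering of an SE block, assuming the default reduction ratio r = 16 reported in the paper; the squeeze step is a global average pool and the excitation step is a two-layer gating bottleneck whose sigmoid output rescales each channel. This is an illustrative sketch, not the authors' released code.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: squeeze via global average pooling,
    excite via a two-layer bottleneck gate, then rescale each channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))           # squeeze: (B, C) channel descriptor
        w = self.fc(s).view(b, c, 1, 1)  # excitation: per-channel weights in (0, 1)
        return x * w                     # recalibrate channel responses

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```

The bottleneck (C -> C/r -> C) keeps the added parameter count small, which is where the "slight additional computational cost" claimed in the abstract comes from.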


Citations
Journal Article

MAGF-Net: A multiscale attention-guided fusion network for retinal vessel segmentation

TL;DR: MAGF-Net is a multiscale attention-guided fusion network for retinal vessel segmentation that takes full advantage of the channel information from deep layers and the spatial information from shallow layers.
Journal Article

A Dynamic Residual Self-Attention Network for Lightweight Single Image Super-Resolution

TL;DR: The authors propose a dynamic residual self-attention network (DRSAN) for lightweight SISR, focusing on the automated design of residual connections between building blocks.
Journal Article

A Machine Learning–based Direction-of-origin Filter for the Identification of Radio Frequency Interference in the Search for Technosignatures

TL;DR: A CNN-based direction-of-origin (DoO) filter is proposed to determine whether a signal detected in one scan is also present in another, achieving precision and recall of 99.15% and 97.81%, respectively.
Journal Article

Deep learning for gastroscopic images: computer-aided techniques for clinicians

TL;DR: This review surveys the latest publications on deep learning applications for overcoming disease-related and non-disease-related gastroscopy challenges, and discusses key issues to be addressed before clinical deployment as well as future directions for applying deep neural networks to gastroscopy.
Journal Article

Chess AI: Competing Paradigms for Machine Intelligence

14 Apr 2022
TL;DR: The authors compare two chess engines, Stockfish and Leela Chess Zero, on Plaskett's Puzzle and show that Stockfish outperforms LCZero.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously and won first place on the ILSVRC 2015 classification task.
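As a hedged illustration of the residual learning idea (a minimal sketch, not the paper's exact configuration; projection shortcuts for dimension changes are omitted), each block learns a residual F(x) that is added back to an identity shortcut:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = relu(x + F(x)), so the layers only
    need to learn a residual correction to the identity mapping."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))  # identity shortcut eases optimisation
```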
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax achieves state-of-the-art performance on ImageNet classification.
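A schematic of the layer pattern described above, written in PyTorch; the sizes follow the published configuration as commonly implemented, but the exact values are incidental to the summary:

```python
import torch.nn as nn

# Schematic of the described architecture: five convolutional layers
# (some followed by max-pooling) and three fully connected layers ending
# in a 1000-way classifier. Expects 227x227 inputs for the flatten size.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits; the softmax is applied in the loss
)
```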
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1,000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
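The mechanism is easiest to see in the cell update itself. Below is a minimal sketch of one LSTM step in the modern gated formulation (the forget gate was added after the 1997 paper); the additive cell-state update is the constant error carousel the summary refers to:

```python
import torch

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. The additive update c = f*c + i*g is the 'constant
    error carousel' that lets gradients flow across long time lags."""
    gates = x @ W + h @ U + b                 # (B, 4H): input, forget, cell, output
    i, f, g, o = gates.chunk(4, dim=1)
    i, f, o = i.sigmoid(), f.sigmoid(), o.sigmoid()
    c = f * c + i * g.tanh()                  # gated, mostly-linear state update
    h = o * c.tanh()
    return h, c

B, D, H = 2, 8, 16
x, h, c = torch.randn(B, D), torch.zeros(B, H), torch.zeros(B, H)
W, U, b = torch.randn(D, 4 * H), torch.randn(H, 4 * H), torch.zeros(4 * H)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)  # torch.Size([2, 16]) torch.Size([2, 16])
```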
Proceedings Article

Attention is All you Need

TL;DR: This paper proposes the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
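The core operation of that architecture is scaled dot-product attention. A minimal PyTorch sketch, with the multi-head projections and masking omitted:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)  # pairwise similarities
    return scores.softmax(dim=-1) @ v                  # weighted sum of values

q = k = v = torch.randn(2, 5, 64)  # (batch, sequence, d_k)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 64])
```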
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: The authors investigate the effect of convolutional network depth on accuracy in the large-scale image recognition setting and show that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 layers.
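A hedged sketch of the design pattern, assuming the paper's stacks of 3x3 convolutions (the vgg_stage helper name is ours): depth is increased by stacking small-filter layers within each pooling stage.

```python
import torch.nn as nn

def vgg_stage(in_ch: int, out_ch: int, convs: int) -> nn.Sequential:
    """One VGG-style stage: a stack of 3x3 convolutions, then 2x2 max-pooling."""
    layers = []
    for i in range(convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)

# e.g. the first two stages of a 16-layer configuration
features = nn.Sequential(vgg_stage(3, 64, 2), vgg_stage(64, 128, 2))
```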