Open Access · Proceedings ArticleDOI

Densely Connected Convolutional Networks

TL;DR: DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
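To make the connectivity pattern concrete, below is a minimal PyTorch sketch of a dense block, assuming BN-ReLU-Conv layers and an illustrative growth rate; it is not the authors' implementation (the official code is at the GitHub link above), just a compact illustration of "each layer takes the feature-maps of all preceding layers as input".

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """BN -> ReLU -> 3x3 conv, producing `growth_rate` new feature-maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate,
                              kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.norm(x)))

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature-maps."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            DenseLayer(in_channels + i * growth_rate, growth_rate)
            for i in range(num_layers)
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # input = concatenation of the block input and every earlier output
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# Illustrative sizes: a 4-layer block with growth rate 12 on a 16-channel input
block = DenseBlock(16, growth_rate=12, num_layers=4)
out = block(torch.randn(1, 16, 32, 32))  # -> 16 + 4*12 = 64 channels
```

With num_layers layers inside the block, this yields the quadratic number of direct connections the abstract describes, since layer i is wired to all i preceding feature-maps.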


Citations
Proceedings ArticleDOI

ESPNetv2: A Light-Weight, Power Efficient, and General Purpose Convolutional Neural Network

TL;DR: ESPNetv2 uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters.
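As a minimal sketch of the building block named in that summary: a depth-wise dilated separable convolution factors a standard convolution into a per-channel dilated 3x3 filter followed by a 1x1 point-wise mix, enlarging the receptive field cheaply. Channel counts and dilation below are illustrative, and this is not the exact ESPNetv2 unit.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Depth-wise dilated 3x3 conv (one filter per channel, groups=channels)
    followed by a point-wise 1x1 conv that mixes channels; dilation grows
    the receptive field without adding parameters."""
    def __init__(self, in_channels, out_channels, dilation):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_channels, in_channels, kernel_size=3,
            padding=dilation, dilation=dilation,
            groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels,
                                   kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

conv = DepthwiseDilatedSeparableConv(32, 64, dilation=2)
y = conv(torch.randn(1, 32, 56, 56))  # spatial size preserved
```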
Posted Content

Efficient Multi-objective Neural Architecture Search via Lamarckian Evolution

TL;DR: LEMONADE is an evolutionary algorithm for multi-objective architecture search that approximates the entire Pareto front of architectures under multiple objectives, such as predictive performance and number of parameters, in a single run.
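For readers unfamiliar with the Pareto-front notion used there, here is a minimal sketch of the dominance test underlying such multi-objective search, with both objectives minimized (e.g. validation error and parameter count). The numbers are hypothetical and this is not the LEMONADE algorithm itself.

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (validation error, millions of parameters) for hypothetical architectures
archs = [(0.055, 3.1), (0.052, 8.0), (0.060, 1.2), (0.050, 7.5)]
print(pareto_front(archs))  # (0.052, 8.0) is dominated by (0.050, 7.5)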
Proceedings ArticleDOI

Learning From Synthetic Data for Crowd Counting in the Wild

TL;DR: A data collector and labeler is developed that generates synthetic crowd scenes and annotates them automatically, without any manual labeling, and a crowd counting method via domain adaptation is proposed, freeing humans from heavy data annotation.
Posted Content

Searching for A Robust Neural Architecture in Four GPU Hours.

TL;DR: The approach, named Gradient-based search using Differentiable Architecture Sampler (GDAS), can be trained end-to-end by gradient descent; the discovered model obtains a test error of 2.82% with only 2.5M parameters, on par with the state of the art.
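A minimal sketch of the differentiable-sampling idea behind that summary: hold one learnable logit per candidate operation and sample a single operation per forward pass with Gumbel-softmax, so the choice is discrete in the forward pass but gradients still reach the logits. The op set and hyper-parameters below are illustrative, not GDAS's actual search space, and a naive version like this still evaluates every op; the paper's speedup comes from evaluating only the sampled sub-graph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferentiableOpSampler(nn.Module):
    """One learnable logit per candidate op; each forward pass samples a
    single op via Gumbel-softmax (hard one-hot forward, soft backward)."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.logits = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x, tau=1.0):
        weights = F.gumbel_softmax(self.logits, tau=tau, hard=True)
        # hard=True: only the sampled op contributes to the output, but
        # gradients flow to self.logits via the straight-through estimator
        return sum(w * op(x) for w, op in zip(weights, self.ops))

ops = [nn.Identity(),
       nn.Conv2d(8, 8, 3, padding=1),
       nn.AvgPool2d(3, stride=1, padding=1)]
sampler = DifferentiableOpSampler(ops)
y = sampler(torch.randn(2, 8, 16, 16))
```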
Journal ArticleDOI

Recalibrating Fully Convolutional Networks With Spatial and Channel “Squeeze and Excitation” Blocks

TL;DR: This paper incorporates the recently proposed “squeeze and excitation” (SE) modules, originally introduced for channel recalibration in image classification, into three state-of-the-art fully convolutional networks (F-CNNs) and demonstrates a consistent improvement of segmentation accuracy on three challenging benchmark datasets.
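A minimal sketch of the channel "squeeze and excitation" recalibration that summary refers to (the paper also proposes a spatial variant); the reduction ratio is illustrative.

```python
import torch
import torch.nn as nn

class ChannelSE(nn.Module):
    """Squeeze: global average pool to one value per channel.
    Excite: a two-layer bottleneck produces a weight in (0, 1) per channel,
    which rescales (recalibrates) the feature-maps."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze -> (b, c)
        return x * w.view(b, c, 1, 1)     # excite: per-channel rescaling

se = ChannelSE(64)
y = se(torch.randn(1, 64, 28, 28))
```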
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: A residual learning framework is presented that eases the training of networks substantially deeper than those used previously; the resulting networks won 1st place on the ILSVRC 2015 classification task.
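A minimal sketch of the residual idea: the block learns a residual F(x) and outputs F(x) + x, so identity mappings are easy to represent and gradients flow through the skip connection. Layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convs learn a residual F(x); the skip connection adds the
    input back, so the block outputs relu(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(self.body(x) + x)

block = ResidualBlock(32)
y = block(torch.randn(1, 32, 56, 56))
```

Note the contrast with DenseNet above: ResNets combine features by summation before passing them on, whereas DenseNets combine them by concatenation.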
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieves state-of-the-art classification performance on ImageNet.
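A compact sketch of that layer layout in PyTorch; the filter counts and sizes mirror the well-known configuration but are illustrative here, and details such as local response normalization and dropout are omitted.

```python
import torch
import torch.nn as nn

# Five conv layers (some followed by max-pooling) and three fully-connected
# layers ending in a 1000-way classification head, as the summary describes.
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits for a final 1000-way softmax
)
logits = alexnet_like(torch.randn(1, 3, 224, 224))
```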
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity, and much more accurate, than previous image datasets.
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: Graph transformer networks (GTNs) are proposed for document recognition; multilayer networks trained with gradient-based learning are shown to synthesize complex decision surfaces that can classify high-dimensional patterns such as handwritten characters.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Trending Questions (1)
How do densely connected structures address the challenges associated with the vanishing-gradient problem and feature propagation?

Because each layer receives the feature-maps of all preceding layers as input and passes its own feature-maps to all subsequent layers, every layer has a short, direct path both to the original input and to the loss gradient. These short paths improve gradient flow during backpropagation, alleviating the vanishing-gradient problem, and they strengthen feature propagation and reuse, since later layers can access early features directly rather than through a long chain of transformations.
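A tiny, self-contained toy (not the paper's network) illustrating the shortcut effect: the output depends on the input both through a deep path whose gradient is deliberately killed and directly through concatenation, and the input still receives a full gradient via the direct connection.

```python
import torch

x = torch.ones(4, requires_grad=True)
deep = torch.tanh(torch.tanh(torch.tanh(x))) * 0.0  # deep path, gradient zeroed
y = torch.cat([deep, x]).sum()                      # direct path via concat
y.backward()
print(x.grad)  # each entry is 1.0: the direct connection preserves gradient
```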