Open Access Proceedings Article (DOI)

Densely Connected Convolutional Networks

TLDR
This paper proposes DenseNet, which connects each layer to every other layer in a feed-forward fashion; this alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
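
As a rough illustration of this connectivity pattern, the sketch below implements a single dense block in PyTorch. It is a minimal sketch rather than the authors' reference implementation (see the repository linked above): the plain BN-ReLU-3x3-Conv composite function, the class names, and the growth-rate value are illustrative assumptions, and the bottleneck and compression variants described in the paper are omitted.

import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    # One composite layer: BatchNorm -> ReLU -> 3x3 convolution producing
    # growth_rate new feature-maps (a simplified stand-in for the paper's H_l).
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(torch.relu(self.norm(x)))

class DenseBlock(nn.Module):
    # num_layers layers, each taking as input the concatenation of the block
    # input and all previous layers' outputs (the dense connectivity above).
    def __init__(self, num_layers, in_channels, growth_rate):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseLayer(in_channels + i * growth_rate, growth_rate) for i in range(num_layers)]
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # concatenate all preceding feature-maps
            features.append(out)
        return torch.cat(features, dim=1)

# Example: a 4-layer dense block with growth rate 12 on a 24-channel input
# produces 24 + 4 * 12 = 72 output channels.
block = DenseBlock(num_layers=4, in_channels=24, growth_rate=12)
out = block(torch.randn(1, 24, 32, 32))  # shape: (1, 72, 32, 32)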


Citations
Journal Article (DOI)

Using YOLOv3 Algorithm with Pre- and Post-Processing for Apple Detection in Fruit-Harvesting Robot

TL;DR: The proposed pre- and post-processing techniques make it possible to adapt the YOLOv3 algorithm for use in the machine vision system of an apple-harvesting robot, yielding an average apple detection time of 19 ms and error rates lower than those of all known similar systems.
Journal Article (DOI)

A survey of swarm and evolutionary computing approaches for deep learning

TL;DR: This paper presents a comprehensive survey of the most recent approaches that hybridize swarm intelligence (SI) and evolutionary computing (EC) algorithms for deep learning, covering DNN architectures and DNN training to improve classification accuracy.
Proceedings Article (DOI)

A Generalized Loss Function for Crowd Counting and Localization

TL;DR: This paper proposes a generalized loss function for learning density maps for crowd counting and localization; it outperforms other losses on four large-scale counting datasets and achieves the best localization performance on NWPU-Crowd and UCF-QNRF.
Journal Article (DOI)

Deep Coupled Dense Convolutional Network With Complementary Data for Intelligent Fault Diagnosis

TL;DR: This paper proposes a deep coupled dense convolutional network (CDCN) with complementary data that integrates information fusion, feature extraction, and fault classification for intelligent fault diagnosis.
Proceedings Article (DOI)

Towards Visually Explaining Variational Autoencoders

TL;DR: This work proposes the first technique to visually explain VAEs by means of gradient-based attention; it presents methods to generate visual attention from the learned latent space and shows that such attention explanations serve more than just explaining VAE predictions.
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; it won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art performance with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article (DOI)

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article (DOI)

Gradient-based learning applied to document recognition

TL;DR: In this article, graph transformer networks (GTNs) are proposed for document recognition; trained with gradient-based learning, they can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Proceedings Article (DOI)

Going deeper with convolutions

TL;DR: This paper proposes Inception, a deep convolutional neural network architecture that sets a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Trending Questions (1)
How do densely connected structures address the challenges associated with the vanishing-gradient problem and feature propagation?

Because every layer is connected directly to all subsequent layers, each layer receives the feature-maps of all preceding layers as input and has a short path to the loss function. The short paths let gradients flow directly back to early layers during training, which alleviates the vanishing-gradient problem, while the direct access to earlier feature-maps encourages feature reuse and strengthens feature propagation through the network.
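
Formally, using the composite-function notation of the paper, where x_0 is the block input, H_\ell is the transformation applied by the \ell-th layer, and [·] denotes channel-wise concatenation, the dense connectivity can be written as:

x_\ell = H_\ell\big([x_0, x_1, \ldots, x_{\ell-1}]\big), \qquad \ell = 1, \ldots, L

Each layer \ell therefore has \ell direct incoming connections, and summing over all L layers gives

\sum_{\ell=1}^{L} \ell = \frac{L(L+1)}{2}

direct connections, the count quoted in the abstract.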