Open Access Proceedings Article

Densely Connected Convolutional Networks

TL;DR
DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
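To make the connectivity pattern concrete, here is a minimal sketch of a dense block in PyTorch. The BN-ReLU-Conv ordering and the growth-rate idea follow the paper's description, but the specific channel counts and layer count below are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenated feature-maps of all preceding layers."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Input to each layer is the concatenation of all earlier outputs,
            # giving the L(L+1)/2 direct connections described in the abstract.
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        return torch.cat(features, dim=1)
```

Because each layer only adds `growth_rate` new feature-maps while reusing everything before it, individual layers can stay very narrow, which is where the parameter savings come from.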



Citations
Journal Article

Enlarging smaller images before inputting into convolutional neural network: zero-padding vs. interpolation

TL;DR: This study proposes zero-padding for resizing images to the same size and compares it with the conventional approach of scaling images up (zooming in) using interpolation, showing that zero-padding had no effect on classification accuracy but considerably reduced training time.
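As a hedged illustration of the two resizing strategies being compared, the sketch below uses PyTorch's F.interpolate and F.pad; the function names and the 224-pixel target size are assumptions for the example, not the study's code.

```python
import torch
import torch.nn.functional as F

def resize_by_interpolation(img, size=224):
    # Scale the image up (zoom in) with bilinear interpolation.
    # img is a CHW float tensor; interpolate expects a batch dimension.
    return F.interpolate(img.unsqueeze(0), size=(size, size),
                         mode="bilinear", align_corners=False).squeeze(0)

def resize_by_zero_padding(img, size=224):
    # Keep pixel values untouched and pad the borders with zeros.
    # Assumes the image is no larger than the target size.
    _, h, w = img.shape
    pad_h, pad_w = size - h, size - w
    return F.pad(img, (pad_w // 2, pad_w - pad_w // 2,
                       pad_h // 2, pad_h - pad_h // 2))
```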
Posted Content

nnU-Net for Brain Tumor Segmentation

TL;DR: By incorporating BraTS-specific modifications regarding postprocessing, region-based training, more aggressive data augmentation, and several minor modifications to the nnU-Net pipeline, nnU-Net is able to improve its segmentation performance substantially.
Posted Content

Enhanced Convolutional Neural Tangent Kernels

TL;DR: The resulting kernel, CNN-GP with LAP and horizontal flip data augmentation, achieves 89% accuracy, matching the performance of AlexNet, which is the best such result the authors know of for a classifier that is not a trained neural network.
Journal Article

Monocular Depth Estimation Using Laplacian Pyramid-Based Depth Residuals

TL;DR: Weight standardization is applied to the pre-activation convolution blocks of the decoder to improve the flow of gradients and thus ease optimization; the proposed method is shown to be effective for monocular depth estimation compared with state-of-the-art models.
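A minimal sketch of weight standardization as commonly defined: each output channel's filter weights are normalized to zero mean and unit standard deviation before the convolution is applied. This illustrates the general technique, not the paper's exact decoder blocks.

```python
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d whose weights are standardized per output channel on every
    forward pass, smoothing the loss landscape and easing optimization."""
    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```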
Proceedings Article

SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

TL;DR: This work presents SuperNeurons, a dynamic GPU memory scheduling runtime that enables training networks far beyond GPU DRAM capacity; it can train ResNet2500, which has 10^4 basic network layers, on a 12 GB K40c, and dynamically allocates memory for convolution workspaces to sustain high performance.
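SuperNeurons' own scheduling runtime is not reproduced here; as a hedged illustration of the same memory-for-compute trade-off, the sketch below uses PyTorch's built-in gradient checkpointing, which discards intermediate activations and recomputes them during the backward pass.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# An illustrative deep stack; SuperNeurons targets networks whose
# activations exceed GPU DRAM. Checkpointing trades recomputation for memory.
model = nn.Sequential(*[
    nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
    for _ in range(50)
])

x = torch.randn(8, 64, 32, 32, requires_grad=True)
# Only segment-boundary activations are kept; segment interiors are
# recomputed during the backward pass.
y = checkpoint_sequential(model, 5, x, use_reentrant=False)
y.sum().backward()
```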
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting residual networks won 1st place in the ILSVRC 2015 classification task.
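The core idea maps onto a short sketch: each block learns a residual function F(x), and the input is added back through an identity shortcut. Channel counts here are illustrative.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # The block learns the residual F(x); the shortcut carries x unchanged,
        # so gradients can flow directly through the addition.
        out = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(out + x)
```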
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
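The layout described above corresponds to a sketch like the following; the kernel sizes and channel widths follow the commonly cited AlexNet configuration for 224x224 inputs, and the softmax itself is left to the loss function.

```python
import torch.nn as nn

alexnet_like = nn.Sequential(
    # Five convolutional layers, some followed by max-pooling.
    nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(inplace=True),
    nn.MaxPool2d(3, stride=2),
    # Three fully-connected layers ending in 1000 class logits.
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000),  # softmax applied via the cross-entropy loss
)
```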
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
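The convolutional network underlying this work is the well-known LeNet-5 family, on top of which the GTN composes its decision modules; a minimal sketch of that style of network (layer sizes follow the common LeNet-5 description for 32x32 inputs) might look like this:

```python
import torch.nn as nn

lenet5_like = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),   # 32x32 -> 28x28 -> 14x14
    nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),  # 14x14 -> 10x10 -> 5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
    nn.Linear(120, 84), nn.Tanh(),
    nn.Linear(84, 10),  # ten digit classes
)
```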
Proceedings Article

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
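The module design behind Inception can be sketched as parallel 1x1, 3x3, and 5x5 convolution branches plus a pooling branch, with the outputs concatenated along the channel dimension; the branch widths below follow the commonly cited first Inception module but are illustrative here.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, 64, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, 96, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(96, 128, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, 16, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(16, 32, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, 32, 1))

    def forward(self, x):
        # Filters of several sizes run in parallel; 1x1 convolutions first
        # reduce dimensionality so the wider filters stay cheap.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)
```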
Trending Questions (1)
How do densely connected structures address the challenges associated with the vanishing-gradient problem and feature propagation?

Dense connectivity gives every layer direct access to the gradients from the loss function and to the original input signal, so gradients no longer have to pass through long chains of transformations; this alleviates the vanishing-gradient problem. At the same time, each layer receives the feature-maps of all preceding layers as input, which strengthens feature propagation and encourages feature reuse instead of relearning redundant representations.