Open Access Proceedings Article

Densely Connected Convolutional Networks

TL;DR
DenseNet connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
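
As a concrete illustration of the connectivity pattern described in the abstract, the sketch below builds one dense block in PyTorch, where each layer receives the concatenated feature-maps of all preceding layers and contributes its own feature-maps to all subsequent layers. The growth rate, kernel size, and BN-ReLU-Conv composition used here are illustrative assumptions rather than the paper's exact configuration; the reference implementation is in the repository linked above.

import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            # Each layer sees the concatenation of the block input and all earlier outputs.
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # inputs: all preceding feature-maps
            features.append(out)                     # outputs: reused by all subsequent layers
        return torch.cat(features, dim=1)

# Usage: a 4-layer block with growth rate 12 on a CIFAR-sized input.
block = DenseBlock(in_channels=16, growth_rate=12, num_layers=4)
y = block(torch.randn(1, 16, 32, 32))  # shape: (1, 16 + 4 * 12, 32, 32)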



Citations
Journal Article

WeatherBench: A Benchmark Data Set for Data-Driven Weather Forecasting

TL;DR: A benchmark data set for data-driven medium-range weather forecasting (specifically 3–5 days), a topic of high scientific interest to atmospheric and computer scientists alike, is presented, and simple, clear evaluation metrics are proposed to enable direct comparison between different methods.
Proceedings Article

MaxViT: Multi-Axis Vision Transformer

TL;DR: This paper introduces an efficient and scalable attention model consisting of two components, blocked local and dilated global attention, and shows strong generative modeling capability on ImageNet, demonstrating the potential of MaxViT blocks as a universal vision module.
Proceedings Article

Fast Sparse ConvNets

TL;DR: This work introduces a family of efficient sparse kernels for several hardware platforms, and shows that sparse versions of the MobileNet v1 and MobileNet v2 architectures substantially outperform strong dense baselines on the efficiency-accuracy curve.
Posted Content

Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions

TL;DR: This paper proposes BCDU-Net, an extension of U-Net for medical image segmentation that combines the advantages of U-Net, bidirectional ConvLSTM (BConvLSTM), and densely connected convolutions.
Journal Article

Scoring of tumor-infiltrating lymphocytes: From visual estimation to machine learning

TL;DR: Different automated TIL scoring approaches are discussed, ranging from classical image segmentation, where cell boundaries are identified and the resulting objects are classified according to shape properties, to machine-learning-based approaches that classify cells directly without segmentation but rely on large amounts of training data.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; their entry won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors present a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for document recognition; it can be used to synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Proceedings Article

Going deeper with convolutions

TL;DR: This paper introduces Inception, a deep convolutional neural network architecture that achieves a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Trending Questions
How do densely connected structures address the challenges associated with the vanishing-gradient problem and feature propagation?

Densely connected structures address these challenges because each layer receives the feature-maps of all preceding layers as input and passes its own feature-maps to all subsequent layers. These direct connections give features and gradients short paths between any layer and the input or the loss, which alleviates the vanishing-gradient problem, strengthens feature propagation, and encourages feature reuse.
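
As a small worked example of the connection count quoted in the abstract: a traditional chain of L layers has L connections, one per consecutive pair, while dense connectivity gives L(L+1)/2 direct connections, each of which is also a short path for gradients from the loss back to earlier layers. The helper names below are purely illustrative.

def chain_connections(num_layers: int) -> int:
    # traditional network: one connection between each layer and its successor
    return num_layers

def dense_connections(num_layers: int) -> int:
    # dense connectivity: every layer connects to every subsequent layer
    return num_layers * (num_layers + 1) // 2

for n in (4, 12, 100):
    print(n, chain_connections(n), dense_connections(n))
# e.g. n = 100: 100 connections in a chain vs. 5050 direct connections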