Open Access · Proceedings ArticleDOI

Densely Connected Convolutional Networks

TL;DR
DenseNet, as introduced in this paper, connects each layer to every other layer in a feed-forward fashion, which alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Abstract
Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections—one between each layer and its subsequent layer—our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less memory and computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet.
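
For concreteness, the feed-forward concatenation pattern described in the abstract can be sketched in a few lines of PyTorch. This is a minimal illustrative re-implementation of a single dense block, not the authors' released code (which is at the GitHub link above); the module names, the 3x3 BN-ReLU-Conv composite function, and the growth rate of 12 are assumptions made for the example.

import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One layer of a dense block: BN-ReLU-Conv producing growth_rate new feature-maps."""
    def __init__(self, in_channels, growth_rate):
        super().__init__()
        self.norm = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False)

    def forward(self, x):
        return self.conv(self.relu(self.norm(x)))

class DenseBlock(nn.Module):
    """Dense block: layer l receives the concatenation of all preceding feature-maps."""
    def __init__(self, num_layers, in_channels, growth_rate=12):
        super().__init__()
        self.layers = nn.ModuleList(
            [DenseLayer(in_channels + i * growth_rate, growth_rate) for i in range(num_layers)]
        )

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Input to layer l is [x_0, x_1, ..., x_{l-1}] concatenated along the channel axis.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

# Example: a 4-layer block over a CIFAR-sized input.
block = DenseBlock(num_layers=4, in_channels=16, growth_rate=12)
out = block(torch.randn(1, 16, 32, 32))  # out has 16 + 4 * 12 = 64 channels

Because every layer in a block of L layers sees the outputs of all earlier layers, the block realizes the L(L+1)/2 direct connections mentioned in the abstract.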



Citations
Proceedings ArticleDOI

SGAS: Sequential Greedy Architecture Search

TL;DR: SGAS, as introduced in this paper, divides the architecture search procedure into sub-problems, choosing and pruning candidate operations in a greedy fashion; it finds state-of-the-art architectures for tasks such as image classification, point cloud classification, and node classification in protein-protein interaction graphs at minimal computational cost.
Journal ArticleDOI

Toward multi-label sentiment analysis: a transfer learning based approach

TL;DR: This study proposes a transfer-learning-based approach that tackles shortcomings of existing aspect-based sentiment analysis (ABSA) methods, introducing Aspect Enhanced Sentiment Analysis (AESA) to classify text into sentiment classes while taking entity aspects into account.
Proceedings ArticleDOI

Cars Can’t Fly Up in the Sky: Improving Urban-Scene Segmentation via Height-Driven Attention Networks

TL;DR: This paper exploits the intrinsic structure of urban-scene images and proposes a general add-on module, the height-driven attention network (HANet), for improving their semantic segmentation; it achieves new state-of-the-art performance on the Cityscapes benchmark by a large margin among ResNet-101-based segmentation models.
Book ChapterDOI

Hierarchical Dynamic Filtering Network for RGB-D Salient Object Detection

TL;DR: The authors integrate the features of different modalities through densely connected structures, use the mixed features to generate dynamic filters with receptive fields of different sizes, and design a hybrid enhanced loss function to further optimize the results.
Journal ArticleDOI

Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot

TL;DR: The Mask R-CNN model is optimized for the recognition and segmentation of overlapping apples and achieves faster recognition speed, meeting the requirements of an apple-harvesting robot's vision system.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; an ensemble of these residual nets won first place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal ArticleDOI

Gradient-based learning applied to document recognition

TL;DR: In this article, the authors show that multilayer neural networks trained with gradient-based learning can synthesize complex decision surfaces that classify high-dimensional patterns such as handwritten characters, and they introduce graph transformer networks (GTNs) for globally training multi-module document recognition systems.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Trending Questions (1)
How do densely connected structures address the challenges associated with the vanishing-gradient problem and feature propagation?

In a DenseNet, the feature-maps of all preceding layers are concatenated and used as the input to each layer, so every layer has a direct, short path both to the original input signal and to the gradients coming from the loss function. These short paths let gradients reach early layers without being attenuated through many intermediate transformations, which alleviates the vanishing-gradient problem, while the explicit concatenation of earlier feature-maps strengthens feature propagation and encourages feature reuse.
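
In the paper's notation, the l-th layer computes

x_l = H_l([x_0, x_1, ..., x_{l-1}])

where [x_0, x_1, ..., x_{l-1}] denotes the concatenation of the feature-maps produced by layers 0 through l-1, and H_l is a composite function of batch normalization, ReLU, and convolution. Because every layer is connected to the loss through such a short path, gradients flow to early layers directly rather than only through a long chain of transformations.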