Open Access · Proceedings Article (DOI)

More is Less: A More Complicated Network with Less Inference Complexity

TL;DR
A novel and general network structure for accelerating the inference process of convolutional neural networks: it is more complicated in network structure yet has less inference complexity.
Abstract
In this paper, we present a novel and general network structure for accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet has less inference complexity. The core idea is to equip each original convolutional layer with a low-cost collaborative layer (LCCL); the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps remain zero after element-wise multiplication, so it is safe to skip the corresponding high-cost convolutions in the original convolutional layer; 2) the LCCL is very fast when implemented as a 1×1 convolution or as a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSVRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with negligible performance drop.
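The mechanism described in the abstract can be illustrated with a minimal PyTorch sketch (module and variable names below are illustrative, not taken from the paper's released code): a cheap 1×1 branch gates the output of the expensive 3×3 branch, and the zeros it produces mark positions where an optimized inference engine could skip the heavy convolution.

```python
# Minimal sketch of the low-cost collaborative layer (LCCL) idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LCCLConv(nn.Module):
    """A convolutional layer paired with a 1x1 low-cost collaborative layer.

    The ReLU outputs of the two parallel branches are multiplied element-wise.
    Wherever the cheap 1x1 branch produces zeros, the result is zero regardless
    of the expensive branch, so an optimized inference engine could skip those
    positions in the 3x3 convolution (the skipping itself is not shown here).
    """

    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.lccl = nn.Conv2d(in_channels, out_channels, kernel_size=1)  # cheap branch

    def forward(self, x):
        mask = F.relu(self.lccl(x))    # sparse, low-cost feature map
        heavy = F.relu(self.conv(x))   # expensive feature map
        return heavy * mask            # zeros in `mask` zero out `heavy`

# Toy usage: in practice the zero positions of `mask` would be used to skip work.
x = torch.randn(1, 16, 32, 32)
y = LCCLConv(16, 32)(x)
print(y.shape)  # torch.Size([1, 32, 32, 32])
```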


Citations
Proceedings Article (DOI)

Class-Discriminative CNN Compression

TL;DR: In this paper, a coarse class-discrimination scheme for early layers and a fine one for later layers is proposed to facilitate the CNN's training goal, which can enhance the hidden layers' linear separability and the classification accuracy of the student.
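A purely illustrative reading of this scheme, sketched below under our own assumptions (auxiliary classifier heads on hidden layers, equal loss weighting, and a given fine-to-coarse label mapping; none of these are confirmed as the paper's method):

```python
# Hypothetical sketch: supervise early features with coarse labels and late
# features with fine labels via small auxiliary classifier heads.
import torch.nn as nn
import torch.nn.functional as F

class AuxHead(nn.Module):
    """Global-average-pool a feature map and classify it."""
    def __init__(self, channels, num_classes):
        super().__init__()
        self.fc = nn.Linear(channels, num_classes)

    def forward(self, feat):
        return self.fc(F.adaptive_avg_pool2d(feat, 1).flatten(1))

def discrimination_loss(early_feat, late_feat, fine_labels, fine_to_coarse,
                        early_head, late_head):
    # fine_to_coarse: LongTensor mapping each fine class id to a coarse class id.
    coarse_labels = fine_to_coarse[fine_labels]
    loss_early = F.cross_entropy(early_head(early_feat), coarse_labels)  # coarse supervision
    loss_late = F.cross_entropy(late_head(late_feat), fine_labels)       # fine supervision
    return loss_early + loss_late
```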
Posted Content

Skip-Convolutions for Efficient Video Processing

TL;DR: The authors propose skip-convolutions, which leverage the large amount of redundancy in video streams to save computation by replacing all convolutions with skip-convolutions in two state-of-the-art architectures.
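As a rough sketch of the general idea (not the authors' exact formulation), a linear convolution over frame t equals the previous output plus a convolution over the frame difference, which is sparse for largely static video; the function below only simulates the skipping by zeroing small differences rather than actually avoiding the arithmetic:

```python
# Residual-computation view behind skip-convolutions; names are illustrative.
import torch
import torch.nn.functional as F

def skip_conv(x_t, x_prev, y_prev, weight, threshold=1e-2):
    """Reuse last frame's output and update only where the input changed.

    Because convolution is linear, conv(x_t) = conv(x_prev) + conv(x_t - x_prev).
    With thresholding of small differences the update becomes approximate but
    the difference tensor is mostly zero, which is what enables skipping.
    """
    delta = x_t - x_prev
    delta = torch.where(delta.abs() > threshold, delta, torch.zeros_like(delta))
    return y_prev + F.conv2d(delta, weight, padding=1)
```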
Journal Article (DOI)

Linear Combination Approximation of Feature for Channel Pruning

TL;DR: This paper proposes a novel channel pruning method, the linear combination approximation of features (LCAF), which approximates each feature map by a linear combination of the other feature maps in the same layer and then removes the most accurately approximated one.
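A hedged sketch of that selection criterion as described in the TL;DR (the least-squares formulation and the names below are our assumptions, not necessarily the paper's exact procedure):

```python
# Pick the feature map best reconstructed from the others as the pruning candidate.
import numpy as np

def most_approximated_channel(feats):
    """feats: (C, N) array, each row a flattened feature map from one channel."""
    C = feats.shape[0]
    errors = []
    for c in range(C):
        others = np.delete(feats, c, axis=0)  # (C-1, N): all other channels
        # Least-squares fit of channel c as a linear combination of the others.
        coeffs, *_ = np.linalg.lstsq(others.T, feats[c], rcond=None)
        errors.append(np.linalg.norm(others.T @ coeffs - feats[c]))
    return int(np.argmin(errors))  # smallest error = most redundant channel
```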
References
Proceedings Article (DOI)

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously and won first place in the ILSVRC 2015 classification task.
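The core building block of that framework is a residual block with an identity shortcut; a minimal PyTorch version for the equal-channel case (layer sizes are illustrative):

```python
# Basic residual block: the stacked convolutions learn a residual that is
# added back to the input through an identity shortcut.
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # identity shortcut
```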
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors train a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art performance on ImageNet classification.
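A compact PyTorch sketch matching that layout (channel sizes follow the widely known AlexNet configuration and assume 224×224 RGB inputs):

```python
# Five convolutional layers, max-pooling after some of them, then three
# fully connected layers ending in 1000-class logits.
import torch.nn as nn

alexnet_like = nn.Sequential(
    nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
    nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits; a softmax over the 1000 classes follows
)
```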
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings Article (DOI)

Going deeper with convolutions

TL;DR: Inception, proposed in this paper, is a deep convolutional neural network architecture that achieved the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.