Open Access · Proceedings ArticleDOI

More is Less: A More Complicated Network with Less Inference Complexity

TLDR
A novel and general network structure for accelerating the inference of convolutional neural networks: the network is more complicated in structure yet has less inference complexity.
Abstract
In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps remain zero after element-wise multiplication, so it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer; 2) the LCCL is very fast if it is implemented as a 1×1 convolution or as a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSVRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with negligible performance drop.
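The mechanism is easy to illustrate. Below is a minimal PyTorch-style sketch of the described structure, assuming the LCCL is implemented as a 1×1 convolution; the class name LCCLBlock and the dense evaluation of the expensive branch are illustrative choices made here, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LCCLBlock(nn.Module):
    """Illustrative sketch: an expensive k x k convolution paired with a
    low-cost collaborative layer (LCCL), here a 1x1 convolution. The ReLU
    outputs of the two branches are multiplied element-wise, so positions
    where the LCCL output is zero could be skipped in the expensive branch."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)
        self.lccl = nn.Conv2d(in_channels, out_channels, kernel_size=1)  # low-cost branch

    def forward(self, x):
        mask = F.relu(self.lccl(x))   # cheap branch; zeros mark skippable cells
        out = F.relu(self.conv(x))    # expensive branch (computed densely here for clarity)
        return out * mask             # element-wise product; zeros stay zero

# Example: estimate how much of the expensive branch could have been skipped.
x = torch.randn(1, 16, 32, 32)
block = LCCLBlock(16, 32)
y = block(x)
zero_fraction = (y == 0).float().mean().item()
print(f"output shape {tuple(y.shape)}, zero fraction {zero_fraction:.2f}")
```

Note that this dense sketch still evaluates the expensive convolution everywhere; the speedup reported in the abstract relies on a sparse implementation that skips the positions where the cheap branch outputs zero.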



Citations
Proceedings Article

Revisiting Parameter Sharing for Automatic Neural Channel Number Search

TL;DR: This paper proposes affine parameter sharing (APS) as a general formulation to unify and quantitatively analyze existing channel search algorithms, and demonstrates that the proposed strategy finds better channel configurations than many state-of-the-art counterparts on benchmark datasets.
Posted Content

Pruning and Quantization for Deep Neural Network Acceleration: A Survey

TL;DR: In this article, the authors provide a survey on two types of network compression: pruning and quantization, and compare current techniques, analyze their strengths and weaknesses, present compressed network accuracy results on a number of frameworks, and provide practical guidance for compressing networks.
Posted Content

Dynamic Group Convolution for Accelerating Convolutional Neural Networks

TL;DR: This paper proposes dynamic group convolution (DGC), which adaptively selects which input channels to connect within each group for individual samples on the fly, while retaining computational efficiency similar to that of conventional group convolution.
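As a rough illustration of the general idea of selecting input channels per sample on the fly (not the DGC algorithm itself), the following hedged sketch scores channels with a small gating head and keeps a fixed number per group; the gating head and the top-k rule are assumptions introduced here.

```python
import torch
import torch.nn as nn

class DynamicGroupConvSketch(nn.Module):
    """Illustrative only: per-sample selection of input channels within each
    group, followed by an ordinary grouped convolution on the gated input.
    The real DGC method differs in detail; this shows the general idea."""

    def __init__(self, in_channels, out_channels, groups=4, keep_per_group=2):
        super().__init__()
        assert in_channels % groups == 0
        self.groups = groups
        self.keep = keep_per_group
        self.gate = nn.Linear(in_channels, in_channels)  # tiny scoring head (assumed)
        self.conv = nn.Conv2d(in_channels, out_channels, 3, padding=1, groups=groups)

    def forward(self, x):
        n, c, h, w = x.shape
        scores = self.gate(x.mean(dim=(2, 3)))              # per-sample channel scores
        scores = scores.view(n, self.groups, c // self.groups)
        topk = scores.topk(self.keep, dim=2).indices        # channels kept in each group
        mask = torch.zeros_like(scores).scatter_(2, topk, 1.0)
        return self.conv(x * mask.view(n, c, 1, 1))         # unselected channels contribute zero
```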
Proceedings ArticleDOI

ILFO: Adversarial Attack on Adaptive Neural Networks

TL;DR: This paper proposes ILFO (Intermediate Output-Based Loss Function Optimization), an attack against a common type of energy-saving neural network, the Adaptive Neural Network (AdNN); it is the first attempt to attack the energy consumption of an AdNN.
Journal ArticleDOI

Spatially Adaptive Feature Refinement for Efficient Inference

TL;DR: In this article, the authors propose a spatially adaptive feature refinement (SAR) approach to reduce spatial redundancy in CNNs by fusing information from two branches: one conducts standard convolution on input features at a lower spatial resolution, and the other selectively refines a set of regions at the original resolution.
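The two-branch pattern described in that summary can be sketched as follows. This is a hedged illustration, not the SAR implementation: the mask head, the top-k selection rule, and fusion by addition are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchRefineSketch(nn.Module):
    """Illustrative only: a low-resolution branch processes every position
    cheaply, and a full-resolution branch refines a small set of positions
    chosen by a predicted mask. This mirrors the two-branch idea, not SAR."""

    def __init__(self, channels, refine_ratio=0.25):
        super().__init__()
        self.refine_ratio = refine_ratio
        self.low_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.high_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.mask_head = nn.Conv2d(channels, 1, 1)  # scores positions to refine (assumed)

    def forward(self, x):
        n, c, h, w = x.shape
        # Cheap branch: convolve at half resolution, then upsample back.
        low = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        low = F.interpolate(self.low_conv(low), size=(h, w), mode="bilinear", align_corners=False)
        # Select a fraction of spatial positions to refine at full resolution.
        scores = self.mask_head(x).view(n, -1)
        k = max(1, int(self.refine_ratio * h * w))
        idx = scores.topk(k, dim=1).indices
        mask = torch.zeros_like(scores).scatter_(1, idx, 1.0).view(n, 1, h, w)
        # Dense here for clarity; a sparse kernel would compute only masked positions.
        high = self.high_conv(x) * mask
        return low + high  # fuse the two branches
```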
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; it won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors train a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax, achieving state-of-the-art ImageNet classification performance.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings ArticleDOI

Going deeper with convolutions

TL;DR: Inception, as presented in this paper, is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.