Open Access · Proceedings Article · DOI

More is Less: A More Complicated Network with Less Inference Complexity

TLDR
A novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity.
Abstract
In this paper, we present a novel and general network structure towards accelerating the inference process of convolutional neural networks, which is more complicated in network structure yet with less inference complexity. The core idea is to equip each original convolutional layer with another low-cost collaborative layer (LCCL), and the element-wise multiplication of the ReLU outputs of these two parallel layers produces the layer-wise output. The combined layer is potentially more discriminative than the original convolutional layer, and its inference is faster for two reasons: 1) the zero cells of the LCCL feature maps will remain zero after element-wise multiplication, so it is safe to skip the calculation of the corresponding high-cost convolution in the original convolutional layer; and 2) the LCCL is very fast if it is implemented as a 1×1 convolution or as a single filter shared by all channels. Extensive experiments on the CIFAR-10, CIFAR-100 and ILSVRC-2012 benchmarks show that our proposed network structure can accelerate the inference process by 32% on average with a negligible performance drop.
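Below is a minimal sketch of the gating idea described in the abstract, written as an assumed PyTorch-style module. The class name LCCLBlock is hypothetical, the expensive branch is computed densely for clarity, and a real speed-up would require a sparsity-aware convolution kernel that actually skips the positions where the low-cost branch outputs zero.

```python
import torch
import torch.nn as nn

class LCCLBlock(nn.Module):
    """Sketch: an expensive convolution paired with a low-cost collaborative layer (LCCL).

    The LCCL is a cheap 1x1 convolution; after ReLU, its zero cells zero out the
    corresponding cells of the expensive branch, so those positions could be
    skipped at inference time by a sparsity-aware kernel.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2)       # high-cost branch
        self.lccl = nn.Conv2d(in_channels, out_channels, 1)   # low-cost 1x1 branch
        self.relu = nn.ReLU()

    def forward(self, x):
        gate = self.relu(self.lccl(x))   # zeros here mark skippable output cells
        out = self.relu(self.conv(x))    # computed densely in this sketch
        return out * gate                # element-wise product keeps the gate's zeros


block = LCCLBlock(16, 32)
y = block(torch.randn(1, 16, 8, 8))
print(y.shape)  # torch.Size([1, 32, 8, 8])
```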



Citations
Posted Content

PnP-DETR: Towards Efficient Visual Analysis with Transformers

TL;DR: Zhang et al. propose an end-to-end poll-and-pool (PnP) sampling module that adaptively allocates the transformer's computation spatially for more efficient visual analysis.
Posted Content

Adaptive Pixel-wise Structured Sparse Network for Efficient CNNs

TL;DR: In this paper, a spatially adaptive framework is proposed that dynamically generates pixel-wise sparsity according to the input image, saving 30%-70% of MACs with only a slight drop in top-1 and top-5 accuracy.
Posted Content

DS-Net++: Dynamic Weight Slicing for Efficient Inference in CNNs and Transformers

TL;DR: In this paper, the authors propose a hardware-efficient dynamic inference regime, named dynamic weight slicing, which adaptively slices a part of the network parameters for inputs of diverse difficulty levels, while keeping the parameters stored statically and contiguously in hardware to avoid the extra burden of sparse computation.
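As a rough illustration of the slicing idea (under assumption; the function name sliced_conv and the fixed ratios below are hypothetical and do not reproduce the paper's learned gating policy), only a leading, contiguously stored portion of the filters is applied for an easy input:

```python
import torch
import torch.nn.functional as F

def sliced_conv(x, weight, bias, ratio):
    """Apply only the first `ratio` fraction of the output filters.

    The full weight stays stored statically and contiguously; slicing just
    selects a leading block of it, so no sparse computation is needed.
    """
    k = max(1, int(round(weight.shape[0] * ratio)))
    return F.conv2d(x, weight[:k], bias[:k], padding=1)

weight = torch.randn(64, 16, 3, 3)   # full, contiguously stored filters
bias = torch.zeros(64)
x = torch.randn(1, 16, 32, 32)
y_easy = sliced_conv(x, weight, bias, ratio=0.25)  # 16 filters for an "easy" input
y_hard = sliced_conv(x, weight, bias, ratio=1.0)   # all 64 filters for a "hard" input
print(y_easy.shape, y_hard.shape)
```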
Journal Article · DOI

Multi-Domain Clustering Pruning: Exploring Space and Frequency Similarity Based on GAN

Mariann R. Piano, +1 more · 01 Jul 2023
TL;DR: Zhang et al. propose a multi-domain structured pruning method based on clustering (MDCP), which seamlessly integrates sufficient information extraction and knowledge distillation within a GAN-based framework.
Book Chapter · DOI

Searching for N:M Fine-grained Sparsity of Weights and Activations in Neural Networks

TL;DR: In this paper, a strategy based on Neural Architecture Search (NAS) is proposed to sparsify both activations and weights throughout the network, while utilizing the recent approach of N:M fine-grained structured sparsity that enables practical acceleration on dedicated GPUs.
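For readers unfamiliar with N:M fine-grained structured sparsity, here is a small illustrative sketch (not the paper's search procedure): within every group of M consecutive weights, only the N entries of largest magnitude are kept, which is the pattern (e.g. 2:4) that dedicated GPUs can accelerate.

```python
import torch

def nm_prune(weight, n=2, m=4):
    """Zero out all but the n largest-magnitude entries in each group of m weights."""
    w = weight.reshape(-1, m).clone()
    drop = torch.argsort(w.abs(), dim=1)[:, : m - n]  # indices of the m-n smallest entries
    w.scatter_(1, drop, 0.0)
    return w.reshape(weight.shape)

w = torch.randn(4, 8)
print(nm_prune(w))  # every group of 4 consecutive weights now has exactly 2 non-zeros
```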
References
Proceedings Article · DOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; this framework won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art performance on ImageNet classification.
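A rough sketch of the layer layout summarized above; the channel widths, kernel sizes and 227x227 input resolution are assumptions taken from the original AlexNet design rather than from this summary.

```python
import torch
import torch.nn as nn

# Five convolutional layers, some followed by max-pooling,
# then three fully-connected layers and a final 1000-way classifier.
features = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(3, 2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(3, 2),
)
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),  # logits; the 1000-way softmax is applied in the loss
)

x = torch.randn(1, 3, 227, 227)
print(classifier(features(x)).shape)  # torch.Size([1, 1000])
```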
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: In this paper, the authors investigated the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting and showed that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings Article · DOI

Going deeper with convolutions

TL;DR: Inception is a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Proceedings Article

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.