Proceedings ArticleDOI

SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks

TLDR
The Sparse CNN (SCNN) accelerator employs a dataflow that keeps the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements.
Abstract
Convolutional Neural Networks (CNNs) have emerged as a fundamental technology for machine learning. High performance and extreme energy efficiency are critical for deployments of CNNs, especially in mobile platforms such as autonomous vehicles, cameras, and electronic personal assistants. This paper introduces the Sparse CNN (SCNN) accelerator architecture, which improves performance and energy efficiency by exploiting the zero-valued weights that stem from network pruning during training and the zero-valued activations that arise from the common ReLU operator. Specifically, SCNN employs a novel dataflow that enables maintaining the sparse weights and activations in a compressed encoding, which eliminates unnecessary data transfers and reduces storage requirements. Furthermore, the SCNN dataflow facilitates efficient delivery of those weights and activations to a multiplier array, where they are extensively reused; product accumulation is performed in a novel accumulator array. On contemporary neural networks, SCNN improves performance and energy by factors of 2.7x and 2.3x, respectively, over a comparably provisioned dense CNN accelerator.
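The dataflow described in the abstract can be illustrated with a minimal 1-D sketch. It assumes a simple value-plus-coordinate compressed format and forms the all-pairs product of nonzero weights and nonzero activations, so no multiplier cycle is spent on a zero operand; SCNN's actual encoding, tiling, and accumulator-array organization are considerably more elaborate than this illustration.

```python
# Minimal 1-D sketch of a compressed-sparse convolution dataflow:
# keep only nonzero weights/activations (value + coordinate), take the
# Cartesian product of the two compressed lists, and scatter each
# product into the output coordinate implied by the operand coordinates.

def compress(dense):
    """Return (value, index) pairs for the nonzero entries."""
    return [(v, i) for i, v in enumerate(dense) if v != 0]

def sparse_conv1d(weights, activations, out_len):
    """1-D cross-correlation computed only over nonzero operand pairs."""
    out = [0.0] * out_len
    for w, wi in compress(weights):
        for a, ai in compress(activations):
            oi = ai - wi            # output coordinate for this product
            if 0 <= oi < out_len:
                out[oi] += w * a    # scatter-accumulate
    return out

acts = [0, 2.0, 0, 0, 3.0, 0]       # sparse after ReLU
wts = [0, 1.0, 0.5]                 # sparse after pruning
out_len = len(acts) - len(wts) + 1  # dense output length: 4
print(sparse_conv1d(wts, acts, out_len))  # only 4 multiplies issued
```

The inner loop touches 2 x 2 = 4 operand pairs instead of the 12 multiplies of the dense computation, which is the source of SCNN's performance and energy gains.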


Citations
Journal ArticleDOI

QuTiBench: Benchmarking Neural Networks on Heterogeneous Hardware

TL;DR: QuTiBench is a novel multi-tiered benchmarking methodology that supports algorithmic optimizations such as quantization, helping system developers understand the benefits and limitations of novel compute architectures for specific neural networks and drive future innovation.
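Quantization, the algorithmic optimization named in this summary, can be sketched minimally as uniform symmetric quantization; the value list and bit-widths below are hypothetical, and production flows use calibrated or learned scales rather than a plain max.

```python
# Minimal sketch of uniform symmetric quantization: snap each value to
# the nearest point on a grid of 2^(bits-1) - 1 levels per sign, scaled
# so the largest magnitude maps to the top level.

def quantize(values, bits):
    """Quantize values to a symmetric uniform grid of the given bit-width."""
    qmax = (1 << (bits - 1)) - 1           # e.g. 127 for 8 bits
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) * scale for v in values]

x = [0.11, -0.48, 0.95, -1.0]
print(quantize(x, 8))   # fine 8-bit grid: small rounding error
print(quantize(x, 3))   # coarse 3-bit grid: visible quantization error
```

The maximum error per value is half a grid step (scale / 2), which is the accuracy-versus-bits trade-off a benchmark like QuTiBench measures across hardware targets.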
Proceedings ArticleDOI

The Sparsity and Activation Analysis of Compressed CNN Networks in a HW CNN Accelerator Model

TL;DR: Presents the sparsity increase produced by CNN compression on 6 representative CNN networks, including the well-known localization network VGG16-SSD-300.
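The sparsity metric analyzed in this line of work is simply the fraction of zeros in a tensor; a minimal sketch of measuring it for ReLU activations follows (the pre-activation values are hypothetical):

```python
# Minimal sketch: measure sparsity (fraction of exact zeros) of a value
# list, e.g. the activations a ReLU layer would emit. An accelerator
# like SCNN skips work in proportion to this fraction.

def sparsity(values):
    """Fraction of exact zeros in a flat list of values."""
    return sum(1 for v in values if v == 0) / len(values)

def relu(values):
    return [max(0.0, v) for v in values]

pre_act = [-1.2, 0.4, -0.3, 2.1, -0.9, 0.0]   # hypothetical pre-activations
post_act = relu(pre_act)
print(sparsity(post_act))  # every negative input becomes a zero
```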
Proceedings ArticleDOI

High PE Utilization CNN Accelerator with Channel Fusion Supporting Pattern-Compressed Sparse Neural Networks

TL;DR: The software includes an ADMM-based method that compresses the patterns of convolution kernels with acceptable accuracy loss and a Huffman encoding method that reduces index-storage overhead; the hardware is a fusion-enabled systolic architecture that reduces the PEs' no-load rate and improves performance by supporting channel fusion.
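Huffman coding of sparse-index metadata, mentioned in this summary, pays off when the index stream is skewed; a minimal generic sketch follows (the gap values are hypothetical, and this is textbook Huffman coding, not the paper's specific encoder):

```python
# Minimal Huffman coder: build a prefix-free code from symbol
# frequencies, then measure the encoded size of a stream of index gaps.
# Skewed gap distributions (many small gaps) compress well.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Return a dict mapping each symbol to its Huffman bitstring."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique id so tuple comparison never reaches the dict.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Hypothetical gaps between consecutive nonzero weight indices.
gaps = [1, 1, 1, 2, 1, 3, 1, 1]
code = huffman_code(gaps)
bits = sum(len(code[g]) for g in gaps)
print(bits)  # fewer bits than 8 gaps at a fixed 2-bit width
```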
Proceedings ArticleDOI

Non-Blocking Simultaneous Multithreading: Embracing the Resiliency of Deep Neural Networks

TL;DR: Proposes NB-SMT, a non-blocking implementation of simultaneous multithreading for DNN accelerators: instead of opportunistically dispatching instructions while they wait in a reservation station for available hardware, it temporarily reduces the computation precision to accommodate all threads at once.
Journal ArticleDOI

PermCNN: Energy-Efficient Convolutional Neural Network Hardware Architecture With Permuted Diagonal Structure

TL;DR: PermCNN is proposed, an energy-efficient hardware architecture for permuted diagonal structured convolutional neural networks (CNNs) that delivers very high hardware performance for inference tasks on CNN models by fully utilizing the strong structured sparsity in the trained models.
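A permuted-diagonal block is a square matrix with exactly one nonzero per row, at a column fixed by a cyclic offset, so an n x n block stores only n values plus the offset; the sketch below illustrates that generic structure and its O(n) matrix-vector product, not PermCNN's exact formulation.

```python
# Generic sketch of a permuted-diagonal n x n block: row i holds its
# single nonzero at column (i + k) % n. The block is stored as n values
# plus one offset k, and the matrix-vector product costs n multiplies.

def permdiag_matvec(diag_vals, k, x):
    """Multiply a permuted-diagonal block (compact form) by vector x."""
    n = len(diag_vals)
    return [diag_vals[i] * x[(i + k) % n] for i in range(n)]

def dense_from_permdiag(diag_vals, k):
    """Expand the compact form into a dense matrix, for checking."""
    n = len(diag_vals)
    return [[diag_vals[i] if j == (i + k) % n else 0.0 for j in range(n)]
            for i in range(n)]

vals, k, x = [2.0, -1.0, 0.5, 3.0], 1, [1.0, 2.0, 3.0, 4.0]
print(permdiag_matvec(vals, k, x))
```

Because every row has exactly one nonzero at a statically known position, hardware needs no index decoding at run time; this is the "strong structured sparsity" the summary refers to.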
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: Proposes a residual learning framework to ease the training of networks substantially deeper than those used previously; the approach won 1st place in the ILSVRC 2015 classification task.
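The residual idea summarized above is simple to state in code; a minimal sketch follows, where the inner function f is a hypothetical stand-in for the block's learned layers (real residual blocks use convolutions, batch normalization, and a nonlinearity).

```python
# Minimal sketch of a residual block: the output is f(x) + x, so the
# layers inside f only need to learn a residual correction to the
# identity, which eases optimization of very deep stacks.

def residual_block(x, f):
    fx = f(x)
    return [a + b for a, b in zip(fx, x)]

# Hypothetical stand-in for a learned transform: when f is near zero,
# the block is near-identity, so stacking many blocks stays trainable.
small_f = lambda x: [0.01 * v for v in x]
print(residual_block([1.0, -2.0, 3.0], small_f))
```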
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: Achieves state-of-the-art image classification with a deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: Investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Posted Content

Deep Residual Learning for Image Recognition

TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.