Proceedings ArticleDOI

1.1 Computing's energy problem (and what we can do about it)

TL;DR
The drive for performance and the end of voltage scaling have made power, not transistor count, the principal factor limiting further improvements in computing performance; continued scaling will require specialized compute engines and design tools that let application experts take part, opening the way to a new wave of innovative and efficient computing devices.
Abstract
Our challenge is clear: The drive for performance and the end of voltage scaling have made power, and not the number of transistors, the principal factor limiting further improvements in computing performance. Continuing to scale compute performance will require the creation and effective use of new specialized compute engines, and will require the participation of application experts to be successful. If we play our cards right, and develop the tools that allow our customers to become part of the design process, we will create a new wave of innovative and efficient computing devices.


Citations
Proceedings ArticleDOI

A Hardware-Centric Approach to Increase and Prune Regular Activation Sparsity in CNNs

TL;DR: In this article, a threshold-based technique is proposed to maximize and prune coarse-grained, regular, blockwise sparsity in activation feature maps during CNN inference on dedicated dataflow architectures.
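The summary above names the technique but not its mechanics, so here is a minimal NumPy sketch of what threshold-based blockwise activation pruning could look like; the block size, threshold value, and function name are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: zero out whole spatial tiles of a feature map whose mean
# absolute activation falls below a threshold, yielding regular (hardware-
# friendly) sparsity. Block size and threshold are assumed values.
import numpy as np

def prune_activation_blocks(fmap: np.ndarray, block: int = 4, thresh: float = 0.4) -> np.ndarray:
    """Zero block x block spatial tiles with mean |activation| < thresh."""
    c, h, w = fmap.shape
    out = fmap.copy()
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = fmap[:, y:y + block, x:x + block]
            if np.abs(tile).mean() < thresh:
                out[:, y:y + block, x:x + block] = 0.0
    return out

fmap = np.maximum(np.random.randn(8, 16, 16), 0)   # ReLU-like feature map
pruned = prune_activation_blocks(fmap)
print("zeroed fraction:", (pruned == 0).mean())
```
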
Posted Content

On the Optimization of Behavioral Logic Locking for High-Level Synthesis.

TL;DR: In this paper, the authors propose a framework to optimize the use of behavioral logic locking for a given security metric, where the designer can implement different meta-heuristics to explore the design space and select where to apply logic locking.
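As a rough illustration of the meta-heuristic design-space exploration described above, the sketch below runs simulated annealing over subsets of operations to lock, under an overhead budget; the security and cost functions are invented placeholders, not the paper's metrics.

```python
# Hypothetical sketch: choose which operations to lock so a security score
# is maximized under an overhead budget. Scores and costs are placeholders.
import math
import random

random.seed(0)
ops = [f"op{i}" for i in range(20)]                       # hypothetical operations
security = {op: random.uniform(0.0, 1.0) for op in ops}   # placeholder security metric
cost = {op: random.uniform(1.0, 5.0) for op in ops}       # placeholder area overhead
BUDGET = 20.0                                             # assumed overhead budget

def score(sel):
    return sum(security[o] for o in sel)

def anneal(steps=5000, temp=1.0, cooling=0.999):
    """Simulated annealing over subsets of operations to lock."""
    current, best = set(), set()
    for _ in range(steps):
        cand = set(current)
        cand.symmetric_difference_update({random.choice(ops)})  # flip one op
        if sum(cost[o] for o in cand) <= BUDGET:
            gain = score(cand) - score(current)
            if gain > 0 or random.random() < math.exp(gain / max(temp, 1e-9)):
                current = cand
                if score(current) > score(best):
                    best = set(current)
        temp *= cooling
    return best

print("operations to lock:", sorted(anneal()))
```
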
Posted Content

Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification

TL;DR: In this article, the authors observe that point cloud graph neural networks are heavily limited by the representational power of the first layer, and find that these models can be radically simplified, with minimal degradation in performance, so long as the feature extraction layer is retained.
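A hedged sketch of that observation, assuming a generic point-cloud network rather than the paper's exact architecture: one neighbourhood-aggregation layer does the feature extraction, and the deeper layers are purely pointwise.

```python
# Illustrative simplification: graph structure is used only in the first
# (feature extraction) layer; everything after is a per-point MLP.
import numpy as np

def knn(points, k=8):
    """Indices of the k nearest neighbours of each point."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]

def feature_extraction(points, idx):
    """Single graph layer: max-pool relative neighbour offsets."""
    rel = points[idx] - points[:, None, :]       # (N, k, 3)
    return rel.max(axis=1)                       # (N, 3)

def pointwise_mlp(x, w1, w2):
    """Deeper layers need no graph: per-point ReLU MLP only."""
    return np.maximum(x @ w1, 0) @ w2

pts = np.random.rand(128, 3)
feat = feature_extraction(pts, knn(pts))
rng = np.random.default_rng(0)
out = pointwise_mlp(feat, rng.standard_normal((3, 32)), rng.standard_normal((32, 16)))
print(out.shape)   # (128, 16)
```
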
Book ChapterDOI

Efficient Learning Algorithm Using Compact Data Representation in Neural Networks

TL;DR: This work investigates the effect of compact data representation on memory savings for network parameters in artificial neural networks while maintaining comparable accuracy in both training and inference, and proposes a dictionary-based architecture that uses a limited number of floating-point entries to represent all synaptic weights.
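A small sketch of the dictionary idea, assuming a k-means-style codebook (the construction details are illustrative, not necessarily the paper's algorithm): every weight is replaced by an index into a short table of floating-point entries, so a 4-bit index can stand in for a 32-bit float.

```python
# Illustrative dictionary-based weight compression: cluster all weights
# into a few floating-point codebook entries and store only indices.
import numpy as np

def build_codebook(weights: np.ndarray, entries: int = 16, iters: int = 20):
    """k-means-style codebook over scalar weights."""
    book = np.linspace(weights.min(), weights.max(), entries)
    for _ in range(iters):
        idx = np.abs(weights[:, None] - book[None, :]).argmin(axis=1)
        for j in range(entries):
            if np.any(idx == j):
                book[j] = weights[idx == j].mean()
    idx = np.abs(weights[:, None] - book[None, :]).argmin(axis=1)
    return book, idx

w = np.random.randn(10_000).astype(np.float32)
book, idx = build_codebook(w)
w_hat = book[idx]   # decompressed weights: 16 entries -> 4-bit indices
print("entries:", len(book), "mse:", float(((w - w_hat) ** 2).mean()))
```
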
Proceedings ArticleDOI

Downscaling and Overflow-aware Model Compression for Efficient Vision Processors

TL;DR: In this paper, the authors propose overflow-aware training to reduce the range of quantized values in a model, and restrict the number of channels in each layer to a multiple of some value (e.g., 16).
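The sketch below illustrates the two ideas named in the summary, with assumed parameters: rounding a layer's channel count up to a multiple of 16, and clipping quantized weights so a worst-case dot product cannot overflow a fixed-width accumulator.

```python
# Illustrative sketch; accumulator width, fan-in, and the clipping rule
# are assumptions, not the paper's exact method.
import numpy as np

def align_channels(c: int, multiple: int = 16) -> int:
    """Round a layer's channel count up to a hardware-friendly multiple."""
    return max(multiple, ((c + multiple - 1) // multiple) * multiple)

def overflow_aware_clip(w: np.ndarray, acc_bits: int = 20, fan_in: int = 256):
    """Clip int8 weights so |sum of fan_in worst-case products| fits
    in a signed acc_bits-wide accumulator (activations up to 127)."""
    max_w = (2 ** (acc_bits - 1) - 1) // (127 * fan_in)
    return np.clip(w, -max_w, max_w)

print(align_channels(37))                              # -> 48
w_q = np.random.randint(-128, 128, size=1000)
print(np.abs(overflow_aware_clip(w_q)).max())          # clipped to +/-16
```
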
References
Journal ArticleDOI

Design of ion-implanted MOSFET's with very small physical dimensions

TL;DR: This paper considers the design, fabrication, and characterization of very small MOSFET switching devices suitable for digital integrated circuits, using dimensions of the order of 1 µm.
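The paper's central constant-field scaling result can be checked with a few lines of arithmetic: shrink dimensions and voltage by a factor k and power density stays constant.

```python
# Constant-field ("Dennard") scaling rules: dimensions, voltage, and
# current all scale by 1/k, so per-device power scales by 1/k^2 while
# device density grows by k^2, leaving power density unchanged.
k = 2.0                      # scaling factor (e.g., halving a 1 µm feature)

voltage = 1.0 / k            # relative supply voltage
current = 1.0 / k            # relative device current
power = voltage * current    # per-device power ~ 1/k^2
density = k ** 2             # devices per unit area
print("power density:", power * density)   # -> 1.0 (unchanged)
```
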
Book

Low Power Digital CMOS Design

TL;DR: Chapters including The Hierarchy of Limits of Power (with J.D. Stratakos et al.) and Low Power Programmable Computation (coauthored with M.B. Srivastava) review the main approaches to voltage scaling.
Journal ArticleDOI

Towards energy-proportional datacenter memory with mobile DRAM

TL;DR: This work architects server memory systems using mobile DRAM devices, trading peak bandwidth for lower energy consumption per bit and more efficient idle modes, and demonstrates 3-5× lower memory power, better proportionality, and negligible performance penalties for data-center workloads.
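A back-of-envelope sketch of the trade described above, using a simple idle-plus-traffic power model; all numbers are hypothetical placeholders, not measurements from the paper.

```python
# Toy model: memory power = static idle power + power proportional to
# traffic. Mobile DRAM trades peak bandwidth for lower idle and per-bit
# energy. Every input value below is made up for illustration; the
# resulting ~4x gap merely echoes the 3-5x figure cited above.
def memory_power(idle_w, active_w_per_gbps, bandwidth_gbps, utilization):
    return idle_w + active_w_per_gbps * bandwidth_gbps * utilization

server = memory_power(idle_w=5.0, active_w_per_gbps=0.5,
                      bandwidth_gbps=12.8, utilization=0.2)
mobile = memory_power(idle_w=0.5, active_w_per_gbps=0.4,
                      bandwidth_gbps=6.4, utilization=0.4)
print(f"server-class: {server:.1f} W, mobile-class: {mobile:.1f} W")
```
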