Open Access · Proceedings ArticleDOI

Managing performance vs. accuracy trade-offs with loop perforation

TL;DR
The results indicate that, for a range of applications, this approach typically delivers performance increases of over a factor of two (and up to a factor of seven) while changing the result that the application produces by less than 10%.
Abstract
Many modern computations (such as video and audio encoders, Monte Carlo simulations, and machine learning algorithms) are designed to trade off accuracy in return for increased performance. To date, such computations typically use ad-hoc, domain-specific techniques developed specifically for the computation at hand. Loop perforation provides a general technique to trade accuracy for performance by transforming loops to execute a subset of their iterations. A criticality testing phase filters out critical loops (whose perforation produces unacceptable behavior) to identify tunable loops (whose perforation produces more efficient and still acceptably accurate computations). A perforation space exploration algorithm perforates combinations of tunable loops to find Pareto-optimal perforation policies. Our results indicate that, for a range of applications, this approach typically delivers performance increases of over a factor of two (and up to a factor of seven) while changing the result that the application produces by less than 10%.
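To make the transformation concrete, the following is a minimal C sketch, not taken from the paper: the kernel, variable names, and perforation rates are illustrative assumptions. It perforates a loop by executing only every stride-th iteration, then naively sweeps a few perforation rates and reports the output distortion each one causes.

```c
#include <stdio.h>

/* Hypothetical mean-of-samples kernel. Loop perforation executes a
 * subset of the iterations (here, every `stride`-th one), trading
 * accuracy (fewer samples) for performance (fewer iterations). */
static double perforated_mean(const double *samples, int n, int stride) {
    double sum = 0.0;
    int kept = 0;
    for (int i = 0; i < n; i += stride) { /* perforated: i += stride, not i++ */
        sum += samples[i];
        kept++;
    }
    return kept > 0 ? sum / kept : 0.0;
}

int main(void) {
    double samples[1000];
    for (int i = 0; i < 1000; i++)
        samples[i] = (double)(i % 100);

    /* stride 1 reproduces the original, unperforated loop */
    double exact = perforated_mean(samples, 1000, 1);

    /* Crude stand-in for perforation space exploration: sweep a few
     * perforation rates and measure the resulting output distortion. */
    for (int stride = 2; stride <= 8; stride *= 2) {
        double approx = perforated_mean(samples, 1000, stride);
        printf("stride %d: mean %.3f vs. %.3f exact (%.2f%% change)\n",
               stride, approx, exact, 100.0 * (approx - exact) / exact);
    }
    return 0;
}
```

In the paper's system the transformation is applied by the compiler rather than by hand, criticality testing first filters out loops whose perforation produces unacceptable behavior, and the exploration searches combinations of tunable loops for Pareto-optimal perforation policies rather than tuning a single loop in isolation.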



Citations
Proceedings ArticleDOI

Approximate computing in the nanoscale era

TL;DR: This paper focuses on approximate arithmetic circuits for computer vision and machine learning, domains with excellent resiliency to computation errors, which makes increasing the efficiency of arithmetic circuits a key point.
Posted Content

GEVO: GPU Code Optimization using Evolutionary Computation

TL;DR: GEVO (Gpu optimization using EVOlutionary computation) is a tool for automatically discovering optimization opportunities and tuning the performance of GPU kernels in the LLVM representation, improving performance on desired criteria while retaining required functionality.
Proceedings ArticleDOI

Automating Dependence-Aware Parallelization of Machine Learning Training on Distributed Shared Memory

TL;DR: The evaluation shows that for a number of ML applications, Orion can parallelize a serial program while preserving critical dependences, achieving a significantly faster convergence rate than data-parallel programs, and a matching convergence rate and comparable computation throughput relative to state-of-the-art manual parallelizations, including model-parallel programs.
Proceedings ArticleDOI

Exploring Non-Volatility of Non-Volatile Memory for High Performance Computing Under Failures

TL;DR: This work introduces a fundamentally new methodology for handling HPC under failures based on NVM: it attempts to use the data objects remaining in NVM (possibly stale, because data updates in caches are lost) to restart crashed applications.
Proceedings ArticleDOI

Information Hiding behind Approximate Computation

TL;DR: It is argued that information can be hidden behind approximate computation without compromising computation accuracy or energy efficiency.
References
Proceedings ArticleDOI

LLVM: a compilation framework for lifelong program analysis & transformation

TL;DR: The design of the LLVM representation and compiler framework is evaluated in three ways: the size and effectiveness of the representation, including the type information it provides; compiler performance for several interprocedural problems; and illustrative examples of the benefits LLVM provides for several challenging compiler problems.
Journal ArticleDOI

The JPEG still picture compression standard

TL;DR: The Baseline method has been by far the most widely implemented JPEG method to date, and is sufficient in its own right for a large number of applications.
Proceedings ArticleDOI

The PARSEC benchmark suite: characterization and architectural implications

TL;DR: This paper presents and characterizes the Princeton Application Repository for Shared-Memory Computers (PARSEC), a benchmark suite for studies of Chip-Multiprocessors (CMPs), and shows that the benchmark suite covers a wide spectrum of working sets, locality, data sharing, synchronization and off-chip traffic.