Open Access

Compile time analysis for hardware transactional memory architectures

A. Chahar
TLDR
The Compiler insights to Transactional memory (CiT) tool is presented, an architecture-independent static analyzer for parallel programs that detects all potential data dependencies between parallel sections of a program. It provides feedback about load-store instructions in a transaction and dependencies inside loops and branches, along with several warnings about system calls that can affect performance.
Abstract
Transactional Memory is a parallel programming paradigm in which tasks are executed concurrently, in the form of transactions, by different resources in a system, and conflicts between them are resolved at run-time. Conflicts, caused by data dependencies, result in aborts and restarts of transactions, degrading the performance of the system. If these data dependencies are known at compile time, the transactions can be scheduled so that conflicts are avoided, reducing the number of aborts and significantly improving the system’s performance. This thesis presents the Compiler insights to Transactional memory (CiT) tool, an architecture-independent static analyzer for parallel programs that detects all potential data dependencies between parallel sections of a program. It provides feedback about load-store instructions in a transaction and dependencies inside loops and branches, along with several warnings about system calls that can affect performance. The efficiency of the tool was tested on an application containing different types of induced data dependencies, as well as on several applications from the STAMP benchmark suite. In the first experiment, a 20% performance improvement was observed when the two versions of the application were executed on the TMFv2 HTM simulator.
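The core idea the abstract describes, detecting potential conflicts between transactions from their data dependencies, can be sketched as an intersection of read/write sets. The sketch below is a minimal illustration of that principle, not CiT's actual implementation; the transaction representation and the `conflicts` function are hypothetical.

```python
# Hypothetical sketch: two transactions may conflict exactly when one's
# write set overlaps the other's read or write set (RAW, WAR, WAW
# dependencies). A compile-time analyzer that computes these sets can
# flag conflicting transactions so a scheduler avoids running them
# concurrently.

def conflicts(tx_a, tx_b):
    """Return the abstract memory locations on which tx_a and tx_b may
    conflict. Each transaction is a dict with 'reads' and 'writes' sets."""
    raw = tx_a["writes"] & tx_b["reads"]   # read-after-write
    war = tx_a["reads"] & tx_b["writes"]   # write-after-read
    waw = tx_a["writes"] & tx_b["writes"]  # write-after-write
    return raw | war | waw

tx1 = {"reads": {"x", "y"}, "writes": {"x"}}
tx2 = {"reads": {"x"}, "writes": {"z"}}
tx3 = {"reads": {"z"}, "writes": {"z"}}

print(sorted(conflicts(tx1, tx2)))  # ['x'] — tx2 reads what tx1 writes
print(sorted(conflicts(tx1, tx3)))  # [] — disjoint sets, safe to run concurrently
```

With this information available at compile time, a scheduler can serialize tx1 and tx2 while letting tx1 and tx3 run in parallel, which is how avoiding conflicts up front reduces aborts and restarts.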


Citations

Determining Performance Boundaries and Automatic Loop Optimization of High-Level System Specifications

TL;DR: A new profiler tool, cprof, is presented that automatically determines, from high-level specifications, the degree of parallelism of source code written in C or C++, making it possible for the designer to make performance trade-offs based on real design points.
References
Book

Computer Architecture: A Quantitative Approach

TL;DR: This best-selling title, considered for over a decade to be essential reading for every serious student and practitioner of computer design, has been updated throughout to address the most important trends facing computer designers today.
Book

Compilers: Principles, Techniques, and Tools

TL;DR: This book discusses the design of a Code Generator, the role of the Lexical Analyzer, and other topics related to code generation and optimization.
Journal ArticleDOI

Pin: building customized program analysis tools with dynamic instrumentation

TL;DR: Pin's goals are to provide easy-to-use, portable, transparent, and efficient instrumentation; to illustrate its versatility, two Pintools in daily use to analyze production software are described.
Proceedings ArticleDOI

Valgrind: a framework for heavyweight dynamic binary instrumentation

TL;DR: Valgrind, a DBI framework designed for building heavyweight dynamic binary analysis (DBA) tools, is described; it enables more interesting, heavyweight tools that are difficult or impossible to build with other DBI frameworks such as Pin and DynamoRIO.
Proceedings ArticleDOI

Transactional memory: architectural support for lock-free data structures

TL;DR: Simulation results show that transactional memory matches or outperforms the best known locking techniques for simple benchmarks, even in the absence of priority inversion, convoying, and deadlock.