Proceedings ArticleDOI
Improved MHP Analysis
Aravind Sankar, Soham Chakraborty, V. Krishna Nandivada +2 more
pp. 207–217
TL;DR: New approaches to MHP analysis for X10-like languages that support async-finish-atomic parallelism are presented, along with a fast incremental MHP algorithm that derives all the statements that may run in parallel with a given statement.
Abstract:
May-Happen-in-Parallel (MHP) analysis is becoming the backbone of many parallel analyses and optimizations. In this paper, we present new approaches to MHP analysis for X10-like languages that support async-finish-atomic parallelism. We present a fast incremental MHP algorithm to derive all the statements that may run in parallel with a given statement. We also extend the MHP algorithm of Agarwal et al. (which answers whether two given X10 statements may run in parallel, and under what condition) to improve its computational complexity without compromising precision.
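The structural reasoning behind pairwise MHP queries in async-finish languages can be illustrated with a small sketch. It follows the classic condition associated with Agarwal et al.'s analysis: two statements may run in parallel iff, walking from their lowest common ancestor in the program-structure tree down toward the textually earlier statement, an async is crossed before any finish. All names here (`Node`, `may_happen_in_parallel`) are illustrative; this is a simplified sketch of that condition, not the improved algorithm the paper presents.

```python
# Sketch: pairwise MHP check on a program-structure tree for
# async-finish parallelism. Node kinds: 'seq' (sequential composition),
# 'async' (spawn), 'finish' (join), 'stmt' (leaf statement).

class Node:
    def __init__(self, kind, children=()):
        self.kind = kind
        self.children = list(children)
        self.parent = None
        for c in self.children:
            c.parent = self

def path_to_root(n):
    path = []
    while n is not None:
        path.append(n)
        n = n.parent
    return path

def may_happen_in_parallel(s1, s2):
    # s1 is assumed to be the textually earlier statement.
    anc1 = path_to_root(s1)
    anc2 = set(path_to_root(s2))
    lca = next(n for n in anc1 if n in anc2)
    # Walk top-down from just below the LCA toward s1: an 'async' met
    # before any 'finish' means s1's task can escape and still be
    # running when control reaches s2; a 'finish' met first joins it.
    for n in reversed(anc1[:anc1.index(lca)]):
        if n.kind == 'finish':
            return False
        if n.kind == 'async':
            return True
    return False

# async { S1 }; S2        -> parallel: the async escapes past S2's start
s1, s2 = Node('stmt'), Node('stmt')
prog = Node('seq', [Node('async', [s1]), s2])
print(may_happen_in_parallel(s1, s2))   # True

# finish { async { T1 } }; T2  -> not parallel: finish joins the async
t1, t2 = Node('stmt'), Node('stmt')
prog2 = Node('seq', [Node('finish', [Node('async', [t1])]), t2])
print(may_happen_in_parallel(t1, t2))   # False
```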
Citations
Proceedings ArticleDOI
May-happen-in-parallel analysis with static vector clocks
TL;DR: Using static vector clocks, this paper can drastically improve the efficiency of existing MHP analyses, without loss of precision: the performance speedup can be up to 1828X, with a much smaller memory footprint (reduced by up to 150X).
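The comparison at the heart of any vector-clock formulation of MHP can be sketched briefly: two statements whose clocks are incomparable (neither dominates the other componentwise) may happen in parallel. The static clock construction is the cited paper's contribution and is not reproduced here; the function names below are illustrative assumptions.

```python
# Sketch: concurrency test on vector-clock timestamps. Two events are
# ordered iff one clock dominates the other componentwise; otherwise
# they are concurrent (may happen in parallel).

def dominates(c1, c2):
    """True iff event with clock c1 happens before event with clock c2."""
    return c1 != c2 and all(a <= b for a, b in zip(c1, c2))

def concurrent(c1, c2):
    return c1 != c2 and not dominates(c1, c2) and not dominates(c2, c1)

print(concurrent((1, 0), (0, 1)))   # True: incomparable clocks
print(concurrent((1, 0), (2, 1)))   # False: (1, 0) precedes (2, 1)
```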
Journal ArticleDOI
Energy-Efficient Compilation of Irregular Task-Parallel Loops
TL;DR: This article proposes a scheme (X10Ergy) to obtain energy gains with minimal impact on execution time for task-parallel languages such as X10 and HJ.
Book ChapterDOI
May-Happen-in-Parallel Analysis with Returned Futures
TL;DR: This paper presents an MHP analysis for asynchronous programs that use futures as a synchronization mechanism; it infers MHP relations involving future variables that are returned by asynchronous tasks.
Proceedings ArticleDOI
On the fly MHP analysis
Sonali Saha, V. Krishna Nandivada +1 more
TL;DR: This manuscript proposes a novel scheme to perform incremental (on-the-fly) MHP analysis of programs written in task-parallel languages like X10, keeping MHP information up to date in an IDE environment, and introduces two new algorithms that handle on-the-fly addition and removal of parallel constructs (finish, async, atomic) and sequential constructs.
Posted Content
OpenMP aware MHP Analysis for Improved Static Data-Race Detection.
TL;DR: In this article, the authors present a data flow analysis based, fast, static data race checker in the LLVM compiler framework to detect race conditions in OpenMP programs and improve turnaround time and/or developer productivity.
References
Book
Introduction to Algorithms
TL;DR: The updated edition of the classic Introduction to Algorithms is intended primarily for use in undergraduate or graduate courses in algorithms or data structures; it presents a rich variety of algorithms and covers them in considerable depth while making their design and analysis accessible to all levels of readers.
Book ChapterDOI
Introduction to Algorithms
TL;DR: This chapter provides an overview of the fundamentals of algorithms and their links to self-organization, exploration, and exploitation.
Book
Advanced Compiler Design and Implementation
TL;DR: Advanced Compiler Design and Implementation, by Steven Muchnick.
Book
On Finding Lowest Common Ancestors: Simplification and Parallelization
Baruch Schieber, Uzi Vishkin +1 more
TL;DR: A linear time and space preprocessing algorithm that enables each query to be answered in O(1) time, as in Harel and Tarjan; it has the advantage of being simple and easily parallelizable.
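The constant-time query interface this reference describes can be illustrated with a simpler standard technique: an Euler tour plus a sparse table gives O(1) LCA queries after O(n log n) preprocessing (Schieber and Vishkin achieve O(n) preprocessing). This is a sketch of that simpler variant, not the cited algorithm; all names are illustrative.

```python
# Sketch: Euler-tour + sparse-table LCA. Build once, then each lca(u, v)
# query is a range-minimum lookup over two overlapping table entries.
from math import log2

def build_lca(children, root=0):
    euler, depth, first = [], [], {}

    def dfs(v, d):
        first[v] = len(euler)
        euler.append(v); depth.append(d)
        for c in children.get(v, []):
            dfs(c, d + 1)
            euler.append(v); depth.append(d)

    dfs(root, 0)
    m = len(euler)
    levels = int(log2(m)) + 1
    # sparse[k][i] = index of the min-depth entry in euler[i : i + 2**k]
    sparse = [list(range(m))]
    for k in range(1, levels):
        prev, half = sparse[-1], 1 << (k - 1)
        row = []
        for i in range(m - (1 << k) + 1):
            a, b = prev[i], prev[i + half]
            row.append(a if depth[a] <= depth[b] else b)
        sparse.append(row)

    def lca(u, v):
        l, r = sorted((first[u], first[v]))
        k = int(log2(r - l + 1))
        a, b = sparse[k][l], sparse[k][r - (1 << k) + 1]
        return euler[a if depth[a] <= depth[b] else b]

    return lca

# Tree: 0 -> {1, 2}, 1 -> {3, 4}
lca = build_lca({0: [1, 2], 1: [3, 4]})
print(lca(3, 4))   # 1
print(lca(3, 2))   # 0
```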
Proceedings ArticleDOI
Work-first and help-first scheduling policies for async-finish task parallelism
TL;DR: This paper introduces a new work-stealing scheduler with compiler support for async-finish task parallelism that can accommodate both work-first and help-first scheduling policies, and provides insights on scenarios in which the help-first policy yields better results than the work-first policy and vice versa.