Journal ArticleDOI

Parallel Programmability and the Chapel Language

TLDR
A candidate list of desirable qualities for a parallel programming language is offered, and how these qualities are addressed in the design of the Chapel language is described, providing an overview of Chapel's features and how they help address parallel productivity.
Abstract
In this paper we consider productivity challenges for parallel programmers and explore ways that parallel language design might help improve end-user productivity. We offer a candidate list of desirable qualities for a parallel programming language, and describe how these qualities are addressed in the design of the Chapel language. In doing so, we provide an overview of Chapel's features and how they help address parallel productivity. We also survey current techniques for parallel programming and describe ways in which we consider them to fall short of our idealized productive programming model.


Citations
Journal Article

Julia: A Fresh Approach to Numerical Computing

TL;DR: The Julia programming language and its design are introduced---a dance between specialization and abstraction that recognizes what remains the same after computation and what is best left untouched, as it has been built by the experts.
Posted Content

Julia: A Fresh Approach to Numerical Computing

TL;DR: The Julia programming language as discussed by the authors combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing, which is designed to be easy and fast.
Proceedings ArticleDOI

LogTM: log-based transactional memory

TL;DR: This paper presents a new implementation of transactional memory, log-based transactional memory (LogTM), that makes commits fast by storing old values to a per-thread log in cacheable virtual memory and storing new values in place.
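
The summary names the key mechanism: eager versioning with a per-thread undo log. Below is a toy software analogue in C, purely for illustration; LogTM itself is a hardware design, and every type and name here is invented for the sketch (bounds checking omitted).

```c
/* Toy software analogue of LogTM's eager versioning: old values go to
   a per-thread undo log, new values are written in place, commit just
   discards the log, and abort replays the log in reverse. */
#include <stddef.h>

#define LOG_CAP 1024

typedef struct { int *addr; int old; } LogEntry;
typedef struct { LogEntry entries[LOG_CAP]; size_t len; } UndoLog;

/* Transactional store: save the old value, then update in place. */
static void tx_write(UndoLog *log, int *addr, int value) {
    log->entries[log->len].addr = addr;
    log->entries[log->len].old  = *addr;
    log->len++;
    *addr = value;                 /* new value stored in place */
}

/* Commit is fast: nothing to copy, just discard the log. */
static void tx_commit(UndoLog *log) { log->len = 0; }

/* Abort walks the log backwards, restoring the old values. */
static void tx_abort(UndoLog *log) {
    while (log->len > 0) {
        LogEntry *e = &log->entries[--log->len];
        *e->addr = e->old;
    }
}

int main(void) {
    static int x = 1, y = 2;
    UndoLog log = { .len = 0 };
    tx_write(&log, &x, 10);
    tx_write(&log, &y, 20);
    tx_abort(&log);                /* x, y restored to 1 and 2 */
    tx_write(&log, &x, 10);
    tx_commit(&log);               /* x keeps 10; log is empty */
    return 0;
}
```
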
Proceedings ArticleDOI

FaRM: fast remote memory

TL;DR: The design and implementation of FaRM is described: a new main-memory distributed computing platform that exploits RDMA to improve both latency and throughput by an order of magnitude relative to state-of-the-art main-memory systems that use TCP/IP.
Proceedings ArticleDOI

Legion: expressing locality and independence with logical regions

TL;DR: A runtime system is described that dynamically extracts parallelism from Legion programs using a distributed, parallel scheduling algorithm that identifies both independent tasks and nested parallelism.
References
Journal ArticleDOI

OpenMP: an industry standard API for shared-memory programming

L. Dagum and R. Menon
TL;DR: At its most elemental level, OpenMP is a set of compiler directives and callable runtime library routines that extend Fortran (and separately, C and C++) to express shared-memory parallelism while leaving the base language unspecified.
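
As a concrete illustration of the two pieces the summary names, here is a minimal C sketch using one compiler directive (#pragma omp parallel for with a reduction) and one callable runtime routine (omp_get_max_threads); build with an OpenMP-enabled compiler, e.g. gcc -fopenmp.

```c
/* Minimal sketch: one OpenMP compiler directive plus one callable
   runtime library routine. */
#include <stdio.h>
#include <omp.h>

int main(void) {
    double sum = 0.0;

    /* Directive: split the loop across threads and combine
       the per-thread partial sums with a reduction. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000000; i++)
        sum += 1.0 / i;

    /* Runtime routine: query the thread count available to OpenMP. */
    printf("harmonic sum = %f (max threads: %d)\n",
           sum, omp_get_max_threads());
    return 0;
}
```
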

MPI: A Message-Passing Interface Standard

TL;DR: This document contains all of the technical features proposed for the interface. The goal of the Message Passing Interface, simply stated, is to develop a widely used standard for writing message-passing programs.
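
A minimal C sketch of the message-passing style the standard defines, using only core calls (MPI_Init, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize); run with at least two ranks, e.g. mpirun -np 2.

```c
/* Minimal sketch of MPI message passing: rank 0 sends an integer
   to rank 1. Requires at least two processes. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```
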
Book

MPI: The Complete Reference

TL;DR: MPI: The Complete Reference is an annotated manual for the latest 1.1 version of the standard that illuminates the more advanced and subtle features of MPI and covers such advanced issues in parallel computing and programming as true portability, deadlock, high-performance message passing, and libraries for distributed and parallel computing.
Journal ArticleDOI

PVM: Parallel virtual machine: a users' guide and tutorial for networked parallel computing

TL;DR: A users' guide and tutorial covering the PVM system, heterogeneous network computing, trends in distributed computing, a PVM overview, other packages, and troubleshooting: getting PVM installed, getting PVM running, compiling applications, running applications, debugging and tracing, and debugging the system.
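
For flavor, a minimal master/worker sketch against the classic pvm3 C API; it assumes the program is compiled and installed as "pvmdemo" somewhere PVM can spawn it, and the payload value is invented for the illustration.

```c
/* Minimal master/worker sketch using the classic pvm3 C API. */
#include <stdio.h>
#include <pvm3.h>

#define MSGTAG 1

int main(void) {
    pvm_mytid();                     /* enroll this process in PVM */
    int ptid = pvm_parent();         /* tid of the spawning task, if any */

    if (ptid == PvmNoParent) {
        /* Master: spawn one worker and wait for its reply. */
        int wtid, value;
        pvm_spawn("pvmdemo", NULL, PvmTaskDefault, "", 1, &wtid);
        pvm_recv(wtid, MSGTAG);
        pvm_upkint(&value, 1, 1);
        printf("master received %d\n", value);
    } else {
        /* Worker: pack an integer and send it back to the master. */
        int value = 42;
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&value, 1, 1);
        pvm_send(ptid, MSGTAG);
    }

    pvm_exit();                      /* leave the virtual machine */
    return 0;
}
```
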
Proceedings ArticleDOI

Active messages: a mechanism for integrated communication and computation

TL;DR: It is shown that active messages are sufficient to implement the dynamically scheduled languages for which message-driven machines were designed and that, with this mechanism, latency tolerance becomes a programming/compiling concern.
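
A toy C illustration of the mechanism the summary describes: each message carries a reference to a handler that the receiver invokes immediately on arrival, so communication feeds directly into computation. The types, names, and in-process "network" here are all invented for the sketch; a real system would ship a handler index rather than a raw pointer across machines.

```c
/* Toy illustration of an active message: arrival runs the handler
   carried by the message, with no intermediate buffering. */
#include <stdio.h>

typedef void (*am_handler)(int arg);

typedef struct {
    am_handler handler;   /* code to run at the receiver */
    int arg;              /* small payload passed to the handler */
} ActiveMessage;

static void add_to_counter(int arg) {
    static int counter = 0;
    counter += arg;
    printf("counter is now %d\n", counter);
}

/* Stand-in for message delivery: arrival just runs the handler. */
static void deliver(ActiveMessage msg) { msg.handler(msg.arg); }

int main(void) {
    ActiveMessage msg = { add_to_counter, 5 };
    deliver(msg);
    deliver(msg);
    return 0;
}
```
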