Open Access Journal Article (DOI)

I-structures: data structures for parallel computing

TLDR
It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures, and it is shown that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
Abstract
It is difficult to achieve elegance, efficiency, and parallelism simultaneously in functional programs that manipulate large data structures. We demonstrate this through careful analysis of program examples using three common functional data-structuring approaches: lists using Cons, arrays using Update (both fine-grained operators), and arrays using make-array (a “bulk” operator). We then present I-structures as an alternative and show elegant, efficient, and parallel solutions for the program examples in Id, a language with I-structures. The parallelism in Id is made precise by means of an operational semantics for Id as a parallel reduction system. I-structures make the language nonfunctional, but do not sacrifice determinacy. Finally, we show that even in the context of purely functional languages, I-structures are invaluable for implementing functional data abstractions.
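
The write-once, blocking-read discipline of an I-structure cell can be approximated outside Id as well. The sketch below uses a Haskell MVar purely for illustration; the names IStructCell, newCell, fillCell, and readCell are invented here and are not part of the paper or of Id. A real I-structure would signal a runtime error on a second write, whereas this sketch merely rejects it.

```haskell
-- Sketch: approximating a single I-structure cell with an MVar.
-- An empty cell blocks readers until a producer fills it exactly once.
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar

newtype IStructCell a = IStructCell (MVar a)

newCell :: IO (IStructCell a)
newCell = IStructCell <$> newEmptyMVar

-- Write once; returns False if the cell was already full
-- (a true I-structure would raise an error instead).
fillCell :: IStructCell a -> a -> IO Bool
fillCell (IStructCell v) = tryPutMVar v

-- Blocking read: waits until the cell has been filled, leaves it full.
readCell :: IStructCell a -> IO a
readCell (IStructCell v) = readMVar v

main :: IO ()
main = do
  cell <- newCell
  _ <- forkIO $ do            -- consumer starts before the value exists
         x <- readCell cell   -- blocks until the producer fills the cell
         print (x :: Int)
  threadDelay 100000          -- producer fills the cell later
  _ <- fillCell cell 42
  threadDelay 100000          -- give the consumer time to print
  pure ()
```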

Citations
Journal Article (DOI)

The interdisciplinary study of coordination

TL;DR: This survey characterizes an emerging research area, sometimes called coordination theory, that focuses on the interdisciplinary study of coordination, using and extending ideas about coordination from disciplines such as computer science, organization theory, operations research, economics, linguistics, and psychology.
Journal Article (DOI)

Scheduling multithreaded computations by work stealing

TL;DR: This paper gives the first provably good work-stealing scheduler for multithreaded computations with dependencies, and shows that the expected time to execute a fully strict computation on P processors using this scheduler is T_1/P + O(T_∞).
Journal Article (DOI)

Dataflow process networks

TL;DR: Dataflow process networks are shown to be a special case of Kahn process networks, a model of computation where a number of concurrent processes communicate through unidirectional FIFO channels, where writes to the channel are nonblocking, and reads are blocking.
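
Haskell's Control.Concurrent.Chan happens to provide exactly this channel discipline: writeChan never blocks and readChan blocks until a token arrives. The two-process pipeline below is a minimal illustrative sketch of such a network, not code from the cited paper.

```haskell
-- Sketch of a two-process Kahn-style network:
-- a producer writes tokens to an unbounded FIFO channel (nonblocking),
-- a consumer reads them (blocking) and prints the doubled values.
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.Chan
import Control.Monad (forM_, replicateM_)

producer :: Chan Int -> IO ()
producer ch = forM_ [1 .. 5] $ \i -> do
  writeChan ch i        -- nonblocking write onto the FIFO
  threadDelay 10000

consumer :: Chan Int -> IO ()
consumer ch = replicateM_ 5 $ do
  x <- readChan ch      -- blocking read: waits for the next token
  print (2 * x)

main :: IO ()
main = do
  ch <- newChan
  _ <- forkIO (producer ch)
  consumer ch           -- run the consumer in the main thread
```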
Proceedings Article (DOI)

Scheduling multithreaded computations by work stealing

TL;DR: This paper gives the first provably good work-stealing scheduler for multithreaded computations with dependencies, and shows that the expected time T_P to execute a fully strict computation on P processors using this work-stealing scheduler is T_P = O(T_1/P + T_∞), where T_1 is the minimum serial execution time of the multithreaded computation and T_∞ is the minimum execution time with an infinite number of processors.
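
Read with concrete numbers, the bound behaves as follows; the figures are hypothetical, chosen only to illustrate the formula, and do not come from the paper.

```latex
% Hypothetical instance: work T_1 = 10^6 unit operations,
% critical path T_\infty = 10^3, and P = 100 processors.
T_P \;=\; O\!\left(\frac{T_1}{P} + T_\infty\right),
\qquad
\frac{T_1}{P} + T_\infty \;=\; \frac{10^6}{100} + 10^3 \;=\; 1.1 \times 10^4 .
% Ignoring constant factors, the run takes about 1.1 x 10^4 time units,
% a speedup close to the ideal factor of P = 100, because P = 100 is well
% below T_1 / T_\infty = 1000 (ample parallel slackness).
```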
Journal Article (DOI)

Programming parallel algorithms

TL;DR: This research on parallel algorithms has not only improved the general understanding of parallelism but in several cases has led to improvements in sequential algorithms.
References
Book

The implementation of functional programming languages

Peyton Jones, +1 more
TL;DR: My 1987 book is now out of print, but it is available here in its entirety in PDF form, in one of two formats (single-page portrait or double-page landscape), and fully searchable, thanks to OCR and Norman Ramsey.
Proceedings Article (DOI)

Making data structures persistent

TL;DR: This paper develops simple, systematic, and efficient techniques for making linked data structures persistent, and uses them to devise persistent forms of binary search trees with logarithmic access, insertion, and deletion times and O(1) space bounds for insertion and deletion.
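
In a purely functional language the simplest such technique, path copying, comes essentially for free: an insertion rebuilds only the nodes on the search path and shares the rest, so the old version of the tree remains usable. A minimal generic sketch (textbook code, not taken from the cited paper):

```haskell
-- Sketch: path-copying insertion in an (unbalanced) binary search tree.
-- Only the nodes on the search path are copied; everything else is
-- shared with the old version, so both versions stay accessible,
-- which is the essence of a persistent data structure.
data Tree a = Leaf | Node (Tree a) a (Tree a)

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r   -- copy this node, share r
  | x > v     = Node l v (insert x r)   -- copy this node, share l
  | otherwise = t                       -- already present: share everything

member :: Ord a => a -> Tree a -> Bool
member _ Leaf = False
member x (Node l v r)
  | x < v     = member x l
  | x > v     = member x r
  | otherwise = True

main :: IO ()
main = do
  let v0 = foldr insert Leaf [5, 3, 8 :: Int]  -- version 0
      v1 = insert 4 v0                         -- version 1; v0 is unchanged
  print (member 4 v0, member 4 v1)             -- prints (False,True)
```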
Journal Article (DOI)

Advanced compiler optimizations for supercomputers

TL;DR: Compilers for vector or multiprocessor computers must have certain optimization features to successfully generate parallel code for such systems.
Proceedings Article (DOI)

Dependence graphs and compiler optimizations

TL;DR: This paper defines such graphs and discusses two kinds of transformations: simple rewriting transformations that remove dependence arcs, and abstraction transformations that deal more globally with a dependence graph.
Journal Article (DOI)

Executing a program on the MIT tagged-token dataflow architecture

TL;DR: An overview of current thinking on dataflow architecture is provided by describing example Id programs, their compilation to dataflow graphs, and their execution on the TTDA, a multiprocessor architecture.