Journal ArticleDOI

Parallel Scripting for Applications at the Petascale and Beyond

Abstract
Scripting accelerates and simplifies the composition of existing codes to form more powerful applications. Parallel scripting extends this technique to allow for the rapid development of highly parallel applications that can run efficiently on platforms ranging from multicore workstations to petascale supercomputers.
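The composition the abstract describes can be illustrated with a minimal sketch: a script that applies an existing external program to many independent inputs concurrently. This is an illustrative example, not the paper's system; the external "code" here is a Python one-liner standing in for any existing binary.

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

def run_tool(text: str) -> str:
    """Invoke an existing external code on one input and return its output.

    The one-liner below is a stand-in for any pre-existing program the
    script composes (a simulation, an analysis binary, etc.).
    """
    result = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.argv[1].upper())", text],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def process_all(inputs: list[str]) -> list[str]:
    # The script only declares independent invocations; the executor runs
    # them in parallel, so the same composition scales from a multicore
    # workstation to a larger resource pool with a different executor.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(run_tool, inputs))
```

A parallel-scripting system generalizes this pattern: the executor is replaced by a runtime that dispatches the same independent invocations across cluster or supercomputer nodes.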


Citations
Journal ArticleDOI

Swift: A language for distributed parallel scripting

TL;DR: This work presents Swift's implicitly parallel and deterministic programming model, which applies external applications to file collections using a functional style that abstracts and simplifies distributed parallel execution.
Journal ArticleDOI

An algebraic approach for data-centric scientific workflows

TL;DR: This work proposes an algebraic approach (inspired by relational algebra) and a parallel execution model that enable automatic optimization of scientific workflows and demonstrates performance improvements of up to 226% compared to an ad-hoc workflow implementation.
Journal ArticleDOI

The parallel system for integrating impact models and sectors (pSIMS)

TL;DR: This work describes the pSIMS design and uses example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility and to assess the efficiency gains attained.
Proceedings ArticleDOI

Opportunities and Challenges in Running Scientific Workflows on the Cloud

TL;DR: This work analyzes why there has been such a gap between the two technologies and what it means to bring Cloud and workflow together, presents the key challenges in running workflows on the Cloud, and discusses the research opportunities in realizing workflows on the Cloud.
References
Journal ArticleDOI

MapReduce: simplified data processing on large clusters

TL;DR: This paper presents the implementation of MapReduce, a programming model and an associated implementation for processing and generating large data sets that runs on a large cluster of commodity machines and is highly scalable.
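The MapReduce model summarized above can be sketched with the canonical word-count example: a map phase emits key-value pairs, and a reduce phase aggregates them per key. This is a single-process illustration of the programming model only, not the distributed implementation the paper describes.

```python
from collections import Counter
from itertools import chain

def map_phase(document: str) -> list[tuple[str, int]]:
    # The mapper emits a (word, 1) pair for each word in its input split.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs) -> dict[str, int]:
    # The reducer sums the counts for each key after the shuffle phase
    # has grouped intermediate pairs by key.
    counts: Counter = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

def word_count(documents: list[str]) -> dict[str, int]:
    # In a real deployment the runtime distributes map tasks across a
    # cluster of commodity machines and shuffles intermediate pairs to
    # reducers by key; here both phases run locally.
    return reduce_phase(chain.from_iterable(map_phase(d) for d in documents))
```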
Journal ArticleDOI

The GRID: Blueprint for a New Computing Infrastructure

TL;DR: The main purpose is to update designers and users of parallel numerical algorithms on the latest research in the field, presenting novel ideas, results, and work in progress, and advancing state-of-the-art techniques in parallel and distributed computing for numerical and computational optimization problems in scientific and engineering applications.
Proceedings ArticleDOI

Dryad: distributed data-parallel programs from sequential building blocks

TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.
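Dryad's central abstraction, as summarized above, is a job expressed as a directed acyclic graph whose vertices are sequential computations and whose edges carry data between them. The sketch below shows only that dataflow idea in miniature; the vertex functions and graph shape are illustrative, not Dryad's actual API, and the real engine adds distributed scheduling, fault recovery, and data transport.

```python
from graphlib import TopologicalSorter

def run_dag(vertices, deps):
    """Execute a dataflow DAG sequentially.

    vertices: name -> function taking a list of upstream outputs
    deps:     name -> list of upstream vertex names (the data edges)
    """
    results = {}
    # Topological order guarantees every vertex runs after its inputs
    # are available; a distributed engine would instead schedule ready
    # vertices concurrently on cluster machines.
    for name in TopologicalSorter(deps).static_order():
        upstream = [results[d] for d in deps.get(name, [])]
        results[name] = vertices[name](upstream)
    return results
```

For example, a three-vertex pipeline `read -> sort -> head` is expressed as `deps = {"sort": ["read"], "head": ["sort"]}`, and the runner executes the vertices in dependency order.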
Journal ArticleDOI

Parallel Programmability and the Chapel Language

TL;DR: A candidate list of desirable qualities for a parallel programming language is offered, and how these qualities are addressed in the design of the Chapel language is described, providing an overview of Chapel's features and how they help address parallel productivity.