Journal ArticleDOI

On the granularity and clustering of directed acyclic task graphs

TLDR
It is proved that every nonlinear clustering of a coarse-grain DAG can be transformed into a linear clustering whose parallel time is less than or equal to that of the nonlinear one.
Abstract
The authors consider the impact of granularity on the scheduling of task graphs. Scheduling consists of two parts: the assignment of tasks to processors, also called clustering, and the ordering of tasks for execution within each processor. The authors introduce two types of clusterings: nonlinear and linear. A clustering is nonlinear if two parallel tasks are mapped to the same cluster; otherwise it is linear. Linear clustering fully exploits the natural parallelism of a given directed acyclic task graph (DAG), while nonlinear clustering sequentializes independent tasks to reduce parallelism. The authors also introduce a new quantification of the granularity of a DAG and define a coarse-grain DAG as one whose granularity is greater than one. It is proved that every nonlinear clustering of a coarse-grain DAG can be transformed into a linear clustering whose parallel time is less than or equal to that of the nonlinear one. This result is used to prove the optimality of some important linear clusterings used in parallel numerical computing.
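The linear/nonlinear distinction described in the abstract reduces to a reachability test on the DAG: a clustering is nonlinear exactly when some cluster contains two tasks neither of which precedes the other. The sketch below, in Python using networkx, illustrates that test on a small fork-join graph; the graph, cluster assignments, and function name are illustrative assumptions, not the authors' algorithm or notation.

import networkx as nx

def is_linear_clustering(dag, clusters):
    """Return True if no cluster contains two independent (parallel) tasks."""
    tasks = list(dag.nodes)
    for i, u in enumerate(tasks):
        for v in tasks[i + 1:]:
            if clusters[u] != clusters[v]:
                continue
            # u and v are parallel (independent) if neither is reachable from the other.
            if not nx.has_path(dag, u, v) and not nx.has_path(dag, v, u):
                return False
    return True

# Hypothetical fork-join DAG: a -> b, a -> c, b -> d, c -> d.
dag = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")])

print(is_linear_clustering(dag, {"a": 0, "b": 0, "c": 1, "d": 1}))  # True: each cluster is a chain
print(is_linear_clustering(dag, {"a": 0, "b": 0, "c": 0, "d": 1}))  # False: parallel tasks b and c share cluster 0

In the second clustering the independent tasks b and c are sequentialized on one processor; this is the nonlinear case that, for a coarse-grain DAG, the paper shows can be converted to a linear clustering without increasing parallel time.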


Citations
Journal ArticleDOI

Static scheduling algorithms for allocating directed task graphs to multiprocessors

TL;DR: A taxonomy that classifies 27 scheduling algorithms and their functionalities into different categories is proposed, with each algorithm explained through an easy-to-understand description followed by an illustrative example to demonstrate its operation.
Journal ArticleDOI

Pegasus, a workflow management system for science automation

TL;DR: An integrated view of the Pegasus system is provided, showing the capabilities that have been developed over time in response to application needs and to the evolution of scientific computing platforms.
Journal ArticleDOI

DSC: scheduling parallel tasks on an unbounded number of processors

TL;DR: A low-complexity heuristic for scheduling parallel tasks on an unbounded number of completely connected processors, named the dominant sequence clustering (DSC) algorithm, is presented; it guarantees performance within a factor of two of the optimum for general coarse-grain DAGs.
Journal ArticleDOI

A Comparison of Clustering Heuristics for Scheduling Directed Acyclic Graphs on Multiprocessors

TL;DR: This paper identifies important characteristics of clustering algorithms, proposes a general framework for analyzing and evaluating such algorithms, and presents an analytic performance comparison of Dominant Sequence Clustering (DSC), explaining why DSC is superior to other algorithms.
Journal ArticleDOI

Towards a comprehensive assessment of model structural adequacy

TL;DR: In this article, a unified conceptual framework for modeling the terrestrial hydrosphere is proposed, based on philosophical perspectives from the groundwater, unsaturated zone, terrestrial hydrometeorology, and surface water communities.
References
Book

Matrix computations

Gene H. Golub
Journal ArticleDOI

A set of level 3 basic linear algebra subprograms

TL;DR: This paper describes an extension to the set of Basic Linear Algebra Subprograms targeted at matrix-matrix operations that should provide for efficient and portable implementations of algorithms for high-performance computers.
Journal ArticleDOI

VLSI Array processors

Sun-Yuan Kung
01 Jan 1985
TL;DR: A general overview of VLSI array processors is provided, together with a unified treatment from algorithm, architecture, and application perspectives, covering a broad range of application domains including digital filtering, spectrum estimation, adaptive array processing, image/vision processing, and seismic and tomographic signal processing.

Book

Partitioning and Scheduling Parallel Programs for Multiprocessing

Vivek Sarkar
TL;DR: This book presents two approaches to automatic partitioning and scheduling so that the same parallel program can be made to execute efficiently on widely different multiprocessors, based on a macro-dataflow model and a compile-time scheduling model.