Open Access Proceedings ArticleDOI

SmartNet: a scheduling framework for heterogeneous computing

R. Freund, +3 more
- pp. 514–521
Abstract
SmartNet is a scheduling framework for heterogeneous systems. Preliminary conservative simulation results for one of the optimization criteria show a 1.21× improvement over Load Balancing and a 25.9× improvement over Limited Best Assignment, the two policies that evolved from homogeneous environments. SmartNet achieves these improvements through several innovations. It recognizes and capitalizes on the inherent heterogeneity of computers in today's distributed environments; it recognizes and accounts for the underlying non-determinism of the distributed environment; it implements an original partitioning approach, making runtime prediction more accurate and useful; it schedules based on all shared resource usage, including network characteristics; and it uses statistical and filtering techniques, making a greater amount of prediction information available to the scheduling engine. In this paper, the issues associated with automatically managing a heterogeneous environment are reviewed, SmartNet's architecture and implementation are described, and performance data is summarized.
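The contrast the abstract draws between load balancing and prediction-aware scheduling can be illustrated with a small sketch. This is not SmartNet's actual algorithm; it is a hypothetical expected-time-to-compute (ETC) matrix with made-up values, a naive load balancer that ignores machine heterogeneity, and a simple min-min heuristic standing in for a scheduler that exploits per-(task, machine) runtime predictions.

```python
# Hypothetical sketch, not SmartNet's implementation: the ETC values and the
# min-min heuristic are illustrative stand-ins for prediction-aware scheduling.

# etc[t][m] = predicted run time of task t on machine m (heterogeneous machines)
etc = [
    [2.0, 20.0],   # tasks 0 and 1 run fast on machine 0
    [2.0, 20.0],
    [20.0, 2.0],   # tasks 2 and 3 run fast on machine 1
    [20.0, 2.0],
]

def load_balance(etc):
    """Assign each task to the machine with the least accumulated load,
    ignoring per-machine run-time differences (homogeneous assumption)."""
    ready = [0.0] * len(etc[0])
    for row in etc:
        m = min(range(len(ready)), key=lambda i: ready[i])
        ready[m] += row[m]
    return max(ready)  # makespan

def min_min(etc):
    """Greedy min-min heuristic: repeatedly schedule the unassigned task
    whose best predicted completion time is smallest."""
    ready = [0.0] * len(etc[0])
    todo = list(range(len(etc)))
    while todo:
        t, m = min(
            ((t, m) for t in todo for m in range(len(ready))),
            key=lambda tm: ready[tm[1]] + etc[tm[0]][tm[1]],
        )
        ready[m] += etc[t][m]
        todo.remove(t)
    return max(ready)

print(load_balance(etc))  # 22.0 — tasks land on machines where they run slowly
print(min_min(etc))       # 4.0  — predictions route each task to its fast machine
```

On this (deliberately extreme) matrix, the prediction-aware heuristic beats plain load balancing by a wide margin, which is the kind of gap the abstract's simulation figures reflect at smaller scale.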


Citations
Proceedings ArticleDOI

Scheduling resources in multi-user, heterogeneous, computing environments with SmartNet

TL;DR: The SmartNet resource scheduling system is described and compared to two different resource allocation strategies: load balancing and user-directed assignment. Results indicate that, for the computer environments simulated, SmartNet outperforms both load balancing and user-directed assignment, based on the maximum time users must wait for their tasks to finish.
Proceedings ArticleDOI

The relative performance of various mapping algorithms is independent of sizable variances in run-time predictions

TL;DR: The author studies the performance of four mapping algorithms and concludes that the use of intelligent mapping algorithms is beneficial, even when the expected time for completion of a job is not deterministic.
Proceedings ArticleDOI

A dynamic matching and scheduling algorithm for heterogeneous computing systems

TL;DR: The hybrid remapper is based on a centralized policy and improves a statically obtained initial matching and scheduling by remapping tasks to reduce the overall execution time.
Proceedings ArticleDOI

Predictive application-performance modeling in a computational grid environment

TL;DR: This paper describes and evaluates the application of three local learning algorithms—nearest-neighbor, weighted-average, and locally weighted polynomial regression—for the prediction of run-specific resource usage on the basis of run-time input parameters supplied to tools.
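As a hedged sketch of the nearest-neighbor idea summarized above (the history data, parameter names, and k value are assumptions for illustration, not the paper's actual tool): past runs are stored as (input-parameter vector, observed run time) pairs, and a new run's time is predicted from its k closest historical neighbors.

```python
# Hypothetical sketch of nearest-neighbor run-time prediction; the history
# data and parameters are made up for illustration.
import math

history = [  # ((input size, iterations), observed run time in seconds)
    ((100.0, 10.0), 1.2),
    ((200.0, 10.0), 2.3),
    ((400.0, 20.0), 9.1),
    ((800.0, 20.0), 18.0),
]

def predict_runtime(params, history, k=2):
    """Average the observed run times of the k nearest past runs,
    using Euclidean distance in parameter space."""
    nearest = sorted(history, key=lambda run: math.dist(run[0], params))[:k]
    return sum(t for _, t in nearest) / len(nearest)

print(predict_runtime((150.0, 10.0), history))  # averages the two closest runs
```

The weighted-average and locally weighted regression variants the TL;DR mentions replace this plain mean with distance-weighted estimates, so nearer historical runs contribute more to the prediction.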
Journal ArticleDOI

Techniques for mapping tasks to machines in heterogeneous computing systems

TL;DR: The goal of this invited keynote paper is to introduce the reader to some of the different distributed and parallel types of HC environments and examine some research issues for HC systems consisting of a network of different machines.
References
Book

Computers and Intractability: A Guide to the Theory of NP-Completeness

TL;DR: This quarterly column provides a continuing update to the list of problems (NP-complete and harder) presented by M. R. Garey and D. S. Johnson in their book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., San Francisco, 1979.
Journal ArticleDOI

Reliable communication in the presence of failures

TL;DR: A review of several uses for the protocols in the ISIS system, which supports fault-tolerant resilient objects and bulletin boards, illustrates the significant simplification of higher level algorithms made possible by the approach.

Mach: A New Kernel Foundation for UNIX Development.

TL;DR: Mach is a multiprocessor operating system kernel and environment under development at Carnegie Mellon University; it provides a new foundation for UNIX development that spans networks of uniprocessors and multiprocessors.