Open Access
Design and Evaluation of Dynamic Replication Strategies for a High-Performance Data Grid
TLDR
A simulation framework is developed to model a grid scenario, enabling comparative studies of alternative dynamic replication strategies for three different kinds of access patterns; the results show that the best strategy achieves significant savings in latency and bandwidth consumption when the access patterns contain a moderate amount of geographical locality.
Abstract:
Physics experiments that generate large amounts of data need to be able to share it with researchers around the world. High performance grids facilitate the distribution of such data to geographically remote places. Dynamic replication can be used as a technique to reduce bandwidth consumption and access latency in accessing these huge amounts of data. We describe a simulation framework that we have developed to model a grid scenario, which enables comparative studies of alternative dynamic replication strategies. We present preliminary results obtained with this simulator, in which we evaluate the performance of six different replication strategies for three different kinds of access patterns. The simulation results show that the best strategy has significant savings in latency and bandwidth consumption if the access patterns contain a moderate amount of geographical locality.
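The mechanism the abstract describes, where sites dynamically replicate files fetched from remote tiers so later accesses stay local, can be illustrated with a toy simulation. The hierarchy, file sizes, cache capacities, LRU eviction, and locality model below are all hypothetical assumptions for illustration; this is not a reproduction of the paper's simulator or its six strategies.

```python
import random

# Toy 3-level grid: root -> 2 regions -> 4 sites. All names, sizes,
# capacities, and the locality model are illustrative assumptions.
FILE_SIZE_MB = 100
FILES = [f"file{i}" for i in range(50)]

class Site:
    def __init__(self, name, parent=None, capacity=10):
        self.name, self.parent, self.capacity = name, parent, capacity
        self.cache = []  # LRU order: most recently used last

    def request(self, f):
        """Fetch file f, replicating it locally; return network hops."""
        if f in self.cache:                   # replica hit: no transfer
            self.cache.remove(f)
            self.cache.append(f)
            return 0
        hops = 1 + (self.parent.request(f) if self.parent else 0)
        if len(self.cache) >= self.capacity:  # evict least recently used
            self.cache.pop(0)
        self.cache.append(f)                  # dynamic replication
        return hops

def build_sites():
    root = Site("root", capacity=len(FILES))
    root.cache = list(FILES)                  # the root holds every file
    regions = [Site(f"region{r}", parent=root) for r in range(2)]
    return [Site(f"site{s}", parent=regions[s % 2]) for s in range(4)]

def run(locality, n=2000, seed=1):
    """Total MB transferred; `locality` biases requests toward hot files."""
    rng, sites, hops = random.Random(seed), build_sites(), 0
    for _ in range(n):
        pool = FILES[:10] if rng.random() < locality else FILES
        hops += rng.choice(sites).request(rng.choice(pool))
    return hops * FILE_SIZE_MB

print(run(0.8), run(0.0))  # replication pays off more under locality
```

Under this model, a workload with locality keeps its hot files resident in the small site caches, so far fewer bytes cross the wide-area links, which mirrors the abstract's observation that the savings depend on geographical locality in the access pattern.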
Citations
Journal ArticleDOI
A prediction-based dynamic replication strategy for data-intensive applications
TL;DR: The simulation results show that the proposed strategy outperforms the existing strategies and reduces the data availability time, data access time, and makespan.
An Effective Weighted Data Replication Strategy for Data Grid
TL;DR: The experimental results show that the MLALW strategy gives better performance than the other algorithms and prevents unnecessary creation of replicas, which leads to efficient storage usage.
Journal ArticleDOI
A data replication algorithm for groups of files in data grids
TL;DR: A new dynamic data replication algorithm named Popular Groups of Files Replication (PGFR) is introduced, based on the assumption that users in a Virtual Organization have similar interests in groups of files. PGFR builds a connectivity graph to recognize groups of dependent files in each grid site and replicates the most popular group of files to each grid site, thus increasing local availability.
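One plausible reading of the connectivity-graph idea in PGFR is to count how often pairs of files are accessed by the same job, link pairs whose co-access count passes a threshold, and treat connected components as file groups. The access log, file names, and threshold below are hypothetical; this is a sketch of the general technique, not PGFR's actual algorithm.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-job access log: the set of files each job touched.
job_accesses = [
    {"a", "b", "c"}, {"a", "b"}, {"a", "b", "c"},
    {"d", "e"}, {"a", "c"}, {"d", "e"}, {"f"},
]
THRESHOLD = 2  # illustrative co-access cutoff for drawing an edge

# Count co-accesses and build the connectivity graph's adjacency sets.
edges = Counter()
for files in job_accesses:
    for u, v in combinations(sorted(files), 2):
        edges[(u, v)] += 1

adj = {}
for (u, v), n in edges.items():
    if n >= THRESHOLD:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

def components(adj):
    """Connected components of the graph = groups of dependent files."""
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                group.add(node)
                stack.extend(adj[node])
        groups.append(group)
    return groups

# Popularity of a group = total accesses to its member files; the most
# popular group is the candidate for replication to other sites.
popularity = Counter(f for files in job_accesses for f in files)
best = max(components(adj), key=lambda g: sum(popularity[f] for f in g))
print(best)
```

With this toy log, files a, b, and c are frequently co-accessed and form the most popular group, so the whole group, rather than individual files, would be replicated together.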
Journal ArticleDOI
Scheduling Algorithm with Potential Behaviors
TL;DR: This paper considers the influence of potential behaviors on the access cost and proposes a data-intensive job scheduling algorithm with potential behaviors, which achieves better mean job time across all jobs, total number of replications, total number of local file accesses, and effective network usage than the scheduling algorithm based on access cost.
Proceedings ArticleDOI
Distributed Popularity Based Replica Placement in Data Grid Environments
TL;DR: This paper introduces a highly distributed replica placement algorithm for hierarchical data grids that exploits data access histories to identify popular files and determines optimal replication locations, improving access performance by minimizing replication overhead under a given traffic pattern.
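A common way to realize popularity-based placement in a hierarchical grid, and one plausible reading of the summary above, is to aggregate per-site access counts up the tree and place a replica at the lowest node whose subtree demand for a file passes a threshold. The topology, counts, and threshold below are hypothetical; this sketches the general technique, not that paper's algorithm.

```python
# Hypothetical two-level hierarchy and per-site access histories.
access_counts = {
    "siteA": {"fileX": 30, "fileY": 2},
    "siteB": {"fileX": 25, "fileY": 1},
    "siteC": {"fileX": 3,  "fileY": 40},
}
tree = {"root": ["region1", "region2"],
        "region1": ["siteA", "siteB"],
        "region2": ["siteC"]}
THRESHOLD = 20  # illustrative demand cutoff for placing a replica

def subtree_demand(node, f):
    """Total accesses to file f from all sites under this node."""
    if node in access_counts:                       # leaf site
        return access_counts[node].get(f, 0)
    return sum(subtree_demand(c, f) for c in tree[node])

def place(f, node="root"):
    """Descend toward demand: while some child's subtree alone passes
    the threshold, push the replica down to sit closer to requesters."""
    hot = [c for c in tree.get(node, [])
           if subtree_demand(c, f) >= THRESHOLD]
    if not hot:
        return [node] if subtree_demand(node, f) >= THRESHOLD else []
    placements = []
    for c in hot:
        placements.extend(place(f, c))
    return placements

print(place("fileX"), place("fileY"))
```

Here fileX is in heavy demand at two leaf sites, so each gets its own replica, while fileY's demand is concentrated under one site; placement thus tracks where the access histories say the traffic actually originates.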
References
Journal ArticleDOI
The Anatomy of the Grid: Enabling Scalable Virtual Organizations
TL;DR: The authors present an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing.
Posted Content
The Anatomy of the Grid - Enabling Scalable Virtual Organizations
TL;DR: This article reviews the "Grid problem," and presents an extensible and open Grid architecture, in which protocols, services, application programming interfaces, and software development kits are categorized according to their roles in enabling resource sharing.
Journal ArticleDOI
Summary cache: a scalable wide-area web cache sharing protocol
TL;DR: This paper demonstrates the benefits of cache sharing, measures the overhead of the existing protocols, and proposes a new protocol called "summary cache", which reduces the number of inter-cache protocol messages and the bandwidth consumption, and eliminates 30% to 95% of the protocol CPU overhead, all while maintaining almost the same cache hit ratios as ICP.
Journal ArticleDOI
The Data Grid
TL;DR: In this paper, the authors introduce design principles for a data management architecture called the data grid, and describe two basic services that are fundamental to the design of a data grid: storage systems and metadata management.
Proceedings ArticleDOI
Summary cache: a scalable wide-area Web cache sharing protocol
TL;DR: This paper proposes a new protocol called "Summary Cache"; each proxy keeps a summary of the URLs of cached documents of each participating proxy and checks these summaries for potential hits before sending any queries, which enables cache sharing among a large number of proxies.
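The summaries that Summary Cache exchanges are Bloom filters: compact bit arrays that may report false positives but never false negatives, so a peer query is only ever wasted, never wrongly skipped. The sketch below shows the idea; the filter size, hash count, and hashing scheme are illustrative choices, not the protocol's actual parameters.

```python
import hashlib

class BloomSummary:
    """Simplified Bloom-filter summary of a proxy's cached URLs.
    m (bits) and k (hash functions) are illustrative, not the
    parameters used by the Summary Cache protocol itself."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = 0  # bit array packed into one integer

    def _positions(self, url):
        # Derive k positions from salted SHA-256 digests of the URL.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{url}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, url):
        for p in self._positions(url):
            self.bits |= 1 << p

    def might_contain(self, url):
        # No false negatives; rare false positives only cost one
        # unnecessary query to a peer proxy.
        return all(self.bits >> p & 1 for p in self._positions(url))

# Each proxy periodically shares its summary; before querying peers
# for a miss, a proxy checks their summaries for potential hits.
summary = BloomSummary()
summary.add("http://example.org/a")
print(summary.might_contain("http://example.org/a"))
print(summary.might_contain("http://example.org/other"))
```

Because the summary is orders of magnitude smaller than the URL list itself, proxies can afford to exchange and store one per peer, which is what lets the scheme scale to many cooperating caches.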