Open Access · Posted Content

Energy-Aware Load Balancing in Content Delivery Networks

TLDR
In this paper, the authors propose techniques to turn off CDN servers during periods of low load while seeking to balance three key design goals: maximize energy reduction, minimize the impact on client-perceived service availability (SLAs), and limit the frequency of on-off server transitions to reduce wear-and-tear and its impact on hardware reliability.
Abstract
Internet-scale distributed systems such as content delivery networks (CDNs) operate hundreds of thousands of servers deployed in thousands of data center locations around the globe. Since the energy costs of operating such a large IT infrastructure are a significant fraction of the total operating costs, we argue for redesigning CDNs to incorporate energy optimizations as a first-order principle. We propose techniques to turn off CDN servers during periods of low load while seeking to balance three key design goals: maximize energy reduction, minimize the impact on client-perceived service availability (SLAs), and limit the frequency of on-off server transitions to reduce wear-and-tear and its impact on hardware reliability. We propose an optimal offline algorithm and an online algorithm to extract energy savings both at the level of local load balancing within a data center and global load balancing across data centers. We evaluate our algorithms using real production workload traces from a large commercial CDN. Our results show that it is possible to reduce the energy consumption of a CDN by more than 55% while ensuring a high level of availability that meets customer SLA requirements and incurring an average of one on-off transition per server per day. Further, we show that keeping even 10% of the servers as hot spares helps absorb load spikes due to global flash crowds with little impact on availability SLAs. Finally, we show that redistributing load across proximal data centers can enhance service availability significantly, but has only a modest impact on energy savings.
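The online policy described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the fixed 10% spare fraction, and the turn-off-at-most-one-server-per-step hysteresis rule are all illustrative assumptions. The asymmetry (turn servers on eagerly to protect availability, turn them off slowly to limit wear-and-tear) reflects the three design goals the paper balances.

```python
import math

def servers_needed(load, capacity_per_server):
    """Minimum number of servers required to serve the current load."""
    return max(1, math.ceil(load / capacity_per_server))

def online_policy(loads, capacity_per_server, spare_fraction=0.10):
    """Hysteresis-based on/off sketch: scale up immediately when load
    rises, but shut down at most one server per time step, limiting
    the frequency of on-off transitions."""
    active = servers_needed(loads[0], capacity_per_server)
    history = []
    transitions = 0
    for load in loads:
        required = servers_needed(load, capacity_per_server)
        # Keep hot spares on top of demand to absorb flash crowds.
        target = required + math.ceil(spare_fraction * required)
        if target > active:
            transitions += target - active  # power on eagerly (protect SLA)
            active = target
        elif target < active:
            transitions += 1                # power off slowly (limit wear)
            active -= 1
        history.append(active)
    return history, transitions
```

For example, with per-server capacity 10 and a load trace `[100, 50, 50, 200]`, the policy drains excess capacity one server at a time after the load drops, then jumps straight to the required count (plus spares) when the spike arrives.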


Citations
Journal ArticleDOI

Data Center Energy Consumption Modeling: A Survey

TL;DR: An in-depth study of the existing literature on data center power modeling, covering more than 200 models organized in a hierarchical structure with two main branches, focusing on hardware-centric and software-centric power models.
Proceedings ArticleDOI

It's not easy being green

TL;DR: This paper uses FORTE to show that carbon taxes or credits are impractical for incentivizing carbon output reduction by providers of large-scale Internet applications, and that FORTE can reduce carbon emissions by 10% without increasing either the mean latency or the electricity bill.
Journal ArticleDOI

Fog of Everything: Energy-Efficient Networked Computing Architectures, Research Challenges, and a Case Study

TL;DR: It is pointed out that the integration of the fog computing (FC) and Internet of Everything (IoE) paradigms may give rise to opportunities for new applications in the realms of the IoE, Smart City, Industry 4.0, and Big Data Streaming, while introducing new open issues.
Journal ArticleDOI

Moving Big Data to The Cloud: An Online Cost-Minimizing Approach

TL;DR: This work studies timely, cost-minimizing upload of massive, dynamically-generated, geo-dispersed data into the cloud, for processing using a MapReduce-like framework, and proposes two online algorithms: an online lazy migration (OLM) algorithm and a randomized fixed horizon control (RFHC) algorithm.
Journal ArticleDOI

Virtual Machine Consolidation with Multiple Usage Prediction for Energy-Efficient Cloud Data Centers

TL;DR: A virtual machine consolidation algorithm with multiple usage prediction (VMCUP-M) improves the energy efficiency of cloud data centers, reducing the number of migrations and the power consumption of the servers while complying with the service level agreement.
References
Journal ArticleDOI

The Case for Energy-Proportional Computing

TL;DR: Energy-proportional designs would enable large energy savings in servers, potentially doubling their efficiency in real-life use, particularly in the memory and disk subsystems.
Proceedings ArticleDOI

Managing energy and server resources in hosting centers

TL;DR: Experimental results from a prototype confirm that the system adapts to offered load and resource availability, and can reduce server energy usage by 29% or more for a typical Web workload.
Journal ArticleDOI

Worldwide electricity used in data centers

TL;DR: This study estimates historical electricity use by data centers worldwide and regionally on the basis of more detailed data than were available for previous assessments, including electricity used by servers, data center communications, and storage equipment.
Proceedings ArticleDOI

Cutting the electric bill for internet-scale systems

TL;DR: The geographic and temporal variation in electricity prices is characterized, and it is argued that existing distributed systems should be able to exploit this variation for significant economic gains.
Journal ArticleDOI

The Akamai network: a platform for high-performance internet applications

TL;DR: An overview of the components and capabilities of the Akamai platform is given, and some insight into its architecture, design principles, operation, and management is offered.