Ensemble-level Power Management for Dense Blade Servers
Parthasarathy Ranganathan, Phil Leech, David Irwin*, and Jeffrey Chase*
Hewlett Packard
partha.ranganathan@hp.com, phil.leech@hp.com
*Duke University
irwin@cs.duke.edu, chase@cs.duke.edu
Abstract
One of the key challenges for high-density servers
(e.g., blades) is the increased costs in addressing the
power and heat density associated with compaction.
Prior approaches have mainly focused on reducing
the heat generated at the level of an individual server.
In contrast, this work proposes power efficiencies at a
larger scale by leveraging statistical properties of
concurrent resource usage across a collection of
systems (“ensemble”). Specifically, we discuss an
implementation of this approach at the blade
enclosure level to monitor and manage the power
across the individual blades in a chassis. Our
approach requires low-cost hardware modifications
and relatively simple software support. We evaluate
our architecture through both prototyping and
simulation. For workloads representing 132 servers
from nine different enterprise deployments, we show
significant power budget reductions at performances
comparable to conventional systems.
1. Introduction
The increasing power density of servers poses a key
challenge in enterprise data center environments. For
example, the rated power consumption of a typical
server is estimated to have increased by nearly a factor
of 10 over the past ten years [7]. Such increased power
densities can lead to a greater probability of thermal
failover, impacting the availability of these systems.
Additional cooling is required to avoid thermal
failover, leading to a dramatic increase in facility costs
for cooling. For example, a 30,000-square-foot, 10MW data
center can easily spend $2-$5 million on the cooling
infrastructure [16]. Additionally, cooling can
also require significant recurring costs. Every watt of
power consumed in the compute equipment needs an
additional 0.5 to 1W of power to operate the cooling
system [16]. That adds another $4-$8 million in yearly
operational costs. Similar cost increases have occurred
for cooling at the individual-server level.
The increasing power density also poses significant
challenges in routing the large amounts of power
needed per rack for future systems. For example, the
power delivery in typical data centers is near 60 Amps
per rack. Even if the cooling problem can be solved for
future higher density systems, it is highly likely that
delivering current to these configurations will reach
the power delivery limits of most data centers.
Beyond power delivery and cooling, increased power
also has implications on the electricity costs for the
compute equipment. For a 10MW data center, this can
range in the millions of dollars per year [16].
Increasing energy consumption also has an
environmental impact (e.g., 4 million tons of annual
carbon-dioxide emissions), and environmental agencies
worldwide are considering standards to regulate server
and data center power (e.g., EnergyStar, TopRunner).
These problems are likely to be exacerbated by recent
trends towards consolidation in data centers and
adoption of higher-density computer systems [18].
Blade servers, in particular, have been roadmapped to
consume up to 55KW/rack, more than a five-fold
increase in power density compared to recently
announced 10KW/rack systems [15].
Traditionally, power density and heat extraction issues
are addressed at the facilities level through changes in
the design and provisioning of power delivery and
cooling infrastructures (e.g., [16, 19]). However, these
involve greater capital investment and/or additional
transitioning costs. Furthermore, it is unlikely that
future increases in power densities can be addressed
purely at a facilities level.
At the systems level, there has been relatively little
work in the area of enterprise power management,
with most prior work focusing on mobile battery life
issues. There has been some work [4, 6, 17] on
algorithms to power-off or power-down servers when
they are not in use, but these focus mainly on the
average electricity consumption of individual servers.

In contrast to these approaches, our work proposes a
new approach based on power management across a
broader collection of individual servers. Our work
leverages observations culled from analyzing several
months of resource (and power usage) trends over
more than a hundred servers in several real-world
enterprise deployments. We find that enterprise
systems are typically underutilized. Across collections
of systems, there are large inefficiencies from
provisioning power and cooling resources for a worst-
case scenario, involving concurrent occurrence of
individual power consumption spikes, which almost
never happens in practice.
We leverage these common-case trends to propose a
new power budgeting approach across an ensemble
of systems. As a specific example, we discuss a new
blade architecture where the power is managed and
enforced at the level of the enclosure (or the chassis).
Such an architecture recognizes trends across multiple
systems and extracts power efficiencies at a larger
scale. This leads to significant reductions in the
requirements for power delivery, power consumption,
and cooling in the system. As a side benefit, this
approach also enables more flexibility in the choice of
component power budgets and allows for improved
low-cost designs for power-supply redundancy.
We discuss the high-level architecture of such a
solution and the specific implementation details of the
design. Overall, our approach requires low-cost
hardware modifications and small changes to the
software. We evaluate our design through both
prototyping and simulation. For the 132 real-world
enterprise server traces, our results show significant
power budget reductions: up to 50% in the processor
component and up to 20% in the overall system power,
with workload slowdown close to 0% in most cases.
The rest of the paper is organized as follows. Section 2
presents detailed information on enterprise resource
usage and power consumption to motivate our
approach. Section 3 discusses the design and
implementation of our architecture, and Section 4
provides an evaluation of the effectiveness of the
design and the various trade-offs. Section 5 presents a
qualitative discussion of other benefits from our
approach, and Section 6 discusses related work.
Section 7 concludes the paper.
2. Real-world Trends
In this section, we present detailed information on
resource usage in enterprise environments, including
long-term data over a spectrum of real-world
deployments. Our primary goal is to motivate and
quantify the key trends that we leverage in our power
management solution discussed next.
Resource usage as proxy for power consumption:
One of the challenges of focusing on “live” real-world
enterprise infrastructure is the lack of existing support
for fine-grained power monitoring. Given the ongoing
use of these servers in business-critical functions, we
could not shut down the machines to add the necessary
metering either. However, these environments either
already had rich support for measuring system
resource usage or allowed simple software scripts to
enable it in real time. Therefore, for the discussions in
this paper, we use the resource usage trends,
specifically, that of the processor, as a first-order
proxy for power consumption trends.
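As a minimal sketch of this first-order proxy (all coefficients below are illustrative assumptions, not the paper's measured values), server power can be modeled as a constant idle floor plus a component that scales with CPU utilization:

```python
# First-order utilization-to-power proxy (illustrative sketch).
# The idle floor captures the "constant factors" that attenuate
# the utilization trend; the dynamic range is CPU-dominated.

IDLE_POWER_W = 12.0     # assumed idle (constant-factor) blade power
DYNAMIC_RANGE_W = 8.0   # assumed idle-to-peak power swing

def estimated_power(cpu_util):
    """Map CPU utilization in [0, 1] to estimated blade power in watts."""
    assert 0.0 <= cpu_util <= 1.0
    return IDLE_POWER_W + DYNAMIC_RANGE_W * cpu_util

print(estimated_power(0.0))  # 12.0
print(estimated_power(1.0))  # 20.0
```

Under such a model, utilization trends carry over directly to power trends, albeit attenuated by the idle floor, which is the sense in which utilization serves as a first-order proxy.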
To validate this assumption, we performed some
experiments on configurations similar to the
environments that we considered. Our results showed
that, for these cases, the trends in resource utilization
were a good first-order proxy for the overall power
(albeit with some attenuation to account for some
constant factors). Furthermore, the processor power
consumption was the dominant (40-75%) and most
variable component of total server power; this
conclusion is consistent with earlier studies (e.g., [4]).
Therefore, we focus our discussion on processor
utilization. We also collected data on the memory,
network, and disk usage, and though not reported here
for lack of space, the trends discussed below apply
qualitatively to those as well. (Note that in Section 4,
where we discuss the results from our prototype on
an environment that we control, we present data
measuring the entire system power that validates this
assumption further.)
Data collection: We studied the variation in the CPU
utilization for 132 servers from nine “live” enterprise
environments (including HP, Walmart, and others
who requested anonymity). These deployments run a
variety of application environments such as enterprise
resource planning, online transaction processing, data
warehousing, collaborative applications, IT and web
infrastructure workloads, backend client processing,
and application development and simulation
workloads. The data includes both traces that we
collected ourselves or had access to, as well as one
public trace that provided this information [2]. The
traces were collected over 3 to 10 weeks, at sampling
time periods ranging from 15 seconds to 5 minutes.

[Figure 1 is a scatter plot of 90th-percentile utilization
(y-axis, 0-100%) versus maximum utilization (x-axis, 0-100%).]
Figure 1: Summary data on individual utilization trends of
132 enterprise servers. Each point represents a server.
Figure 1 summarizes the resource consumption
behavior of individual servers. Each point in the
scatter plot represents a server and shows the 90th
percentile of utilization with respect to the maximum
utilization. Figure 2 presents the cumulative resource
usage statistics for the nine sites across all the servers.
For each site, we pick a representative 7-day trace
when all the servers are active, and at each time
stamp, add the CPU utilization of each server
(between 0-100%) to obtain an overall resource
utilization trace for the site. We present the average,
90th percentile, and maximum value for each case. The
"sum-peaks" column presents the value obtained from
summing the peaks from the individual server
utilizations (from Figure 1). The "Worst" column
shows the actual utilization that the system is
provisioned for. The "Savings" column shows the
difference between the actual provisioning and the
maximum utilization.
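The statistics above can be sketched as follows (synthetic three-server traces stand in for the real time-aligned samples; names and values are illustrative):

```python
import statistics

# Per-server CPU-utilization traces, one sample per timestamp,
# each value between 0 and 100% (synthetic data for illustration).
traces = {
    "srv-a": [10, 20, 90, 15, 10],
    "srv-b": [30, 25, 20, 80, 30],
    "srv-c": [ 5, 10, 15, 10, 70],
}

# Site-level trace: sum the utilizations across servers at each timestamp.
site = [sum(vals) for vals in zip(*traces.values())]

avg = statistics.mean(site)
p90 = sorted(site)[int(0.9 * (len(site) - 1))]    # crude 90th percentile
peak = max(site)                                   # "Max" column
sum_peaks = sum(max(t) for t in traces.values())   # "sum-peaks" column
worst = 100 * len(traces)                          # provisioned worst case
savings = 1 - peak / worst                         # "Savings" column

print(peak, sum_peaks, worst, round(savings, 2))   # 125 240 300 0.58
```

Note the ordering the paper's data exhibits: peak of the sum (125) is well below the sum of the peaks (240), which in turn is below the worst-case provisioning (300).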
Trends: Figures 1 and 2 summarize two key trends
relevant to our discussion.
Bursty, small-duration spikes: At an individual server
level, Figure 1 shows how resource utilization is low
and bursty, with spikes being relatively infrequent and
of small duration. For example, Figure 1 shows that
the 90th percentile of resource usage is often
significantly lower than the maximum utilization.
This has also been documented in several previous
studies and stems from seasonal variations in access
patterns, and common resource deployment practices.
Non-synchronized spikes: More interestingly, across a
large collection of servers, such as in a data center or
blade cluster, the probability of synchronized spikes on
all the servers at the same time is rather low. For
example, a server used for ATM transactions may
spike on Friday versus a server used for airline
transactions that spikes on Thursdays. Similarly,
payroll servers have increased utilizations at the end
of the month, which may not be concurrent with
asynchronous spikes of other servers timed with
advertising launches or product tape-outs. Time-zone
differences across different groups in a global
organization also shift the peak usage times. In all the
nine enterprise deployments we study, the sum of the
individual peak resource usages is significantly higher
than the peak of the total resource utilization. For
example, for Site 1, the peak of the entire solution is a
total CPU utilization (over 26 servers) around 300%.
In contrast, the sum of the peak utilizations of the
individual servers is 1100%, and the actual
provisioning is 2600%. The magnitude difference
between the provisioned worst case and the actual
worst-case utilizations is 88%.
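The Site 1 arithmetic can be checked directly against the Figure 2 numbers:

```python
# Site 1 (Figure 2): peak of the summed utilization is 307%,
# the sum of the individual peaks is 1128%, and the provisioned
# worst case over 26 servers is 26 * 100% = 2600%.
peak_of_sum = 307
sum_of_peaks = 1128
provisioned = 26 * 100

savings_vs_provisioned = 1 - peak_of_sum / provisioned
print(round(100 * savings_vs_provisioned))  # 88, matching the reported savings
```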
Our data shows that these trends are relatively
general, and not limited to an individual site or mix of
workloads. For example, the world-cup infrastructure
represents usage patterns consistent with a one-time
event while the e-commerce infrastructure shows more
regular long-term usage patterns. Similarly, some sites
have servers with independent workloads (e.g.,
backend client) while others have multi-tier
interrelated workloads (e.g., e-commerce). However,
the trends are qualitatively similar in all the cases.
While we summarize instrumentation data gathered
from these sites, we also have anecdotal and less
detailed data from several other enterprise
deployments that match these trends. We do not claim
that all enterprise workloads share these trends;
however, our goal is to show that a large existing base
of enterprise workloads do share these trends.
Implications for power management: Given that
power consumption closely tracks resource usage, the
same trends exist for power consumption behavior in
these workloads, as well.
Current practice, however, is to design the power
budget for the worst-case individual system scenario.
This affects the provisioning of cooling (fans) and
power delivery (power supplies) in the server. Since
worst-case power spikes happen infrequently, this
leads to inefficient overprovisioning in the cooling and
power delivery at the system level.
When these systems operate in the context of a larger
collection of systems, such as a data center, the
inefficiencies are compounded. The total power rating
of the collection of systems is typically computed as
the sum of the individual worst-case ratings. Given

Site  Workload and trace length                                  Servers  Avg   90th%  Max   Sum-peaks  Worst  Savings
1     Backend of pharmaceutical company                          26       87    138    307   1128       2600   88%
2     Web hosting infrastructure for worldcup98 web site [2]     25       256   481    1166  1366       2500   53%
3     SAP-based business process application in large company    27       585   691    919   1654       2700   66%
4     E-commerce web site of a large retail company              15       83    166    234   591        1500   84%
5     Backend for thin enterprise clients - company 1            10       138   184    298   729        1000   70%
6     Backend for thin enterprise clients - company 2            14       102   159    287   1253       1400   80%
7     Front-end customer-facing web site for large company       8        119   187    255   467        800    68%
8     Business processing workload in small company              3        78    132    225   278        300    25%
9     E-commerce web site of small company                       4        90    136    197   228        400    51%
All   All sites                                                  132      1540  1872   2682  7694       13200  80%
Figure 2: Cumulative resource utilization behavior for the nine sites. The last column summarizes the potential savings in
processor resource (and power) provisioning from ensemble-level management.
that the chances of synchronized power peaks are low
(as with the CPU utilization), this leads to even
greater differences between the estimated worst-case
power and the actual peak power at this level. Further,
this estimated worst-case power is used when planning
the cooling and power delivery at these higher levels
(e.g., air-conditioning units, power distribution units),
and consequently, these also end up being
overprovisioned.
One option to address these overprovisioning
inefficiencies is to move the power budget
management to a higher level at a broader collection
of systems (“ensemble”). The key idea is to set the
power budget at the ensemble level to avoid excessive
overprovisioning. Individual bursty workloads can still
be handled within this overall power budget by
dynamically redistributing power budget to that server,
from other servers not currently requiring as much
power. In the cases when this is not possible,
performance throttling can be used to reduce power to
avoid redlining (temperature increase beyond a critical
threshold). The challenges involve careful design of
the hardware hooks as well as implementing the
policies that manage and enforce the budget.
3. Ensemble-level Power Budget Management
Below, we discuss the architecture for ensemble-level
power budget management. In this paper, we focus
primarily on blade servers, since their inherent design
provides for multiple servers housed in the same
enclosure with a common control point (in the
enclosure manager); however, our approach can be
easily adapted to non-blade servers as well.
3.1 High-level description
Functional architecture: Figure 3 presents a
conceptual diagram of our approach. The key
components are (1) a controller at the ensemble level,
and (2) a management agent at each blade. The
management agent provides local power monitoring
and control per server. The controller collects all the
local readings and estimates total power consumption
at the ensemble level. This information is then fed to a
policy-driven control engine that issues directives to
the individual blades on the next steps for power
control. For example, if the total power exceeds a pre-
determined power budget, the controller directs the
individual servers to throttle the power consumption
to bring the overall power back under the threshold
(e.g., through voltage scaling). The policy heuristics
can be implemented to minimize the impact on
performance for the end user, and may be used in
concert with higher-level service-level agreements
(SLA) for different workloads.
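A minimal sketch of this control loop follows (the budget, the throttling yield, and the highest-consumer-first heuristic are all assumptions for illustration; the paper's policy engine and SLA integration are richer):

```python
# Ensemble controller step: collect per-blade power readings,
# compare the total against the chassis-level budget, and direct
# the highest consumers to throttle (e.g., via voltage scaling)
# until the expected total is back under the threshold.

ENSEMBLE_BUDGET_W = 300.0  # assumed chassis-level power budget

def control_step(readings):
    """readings: dict mapping blade id -> current power reading (W).
    Returns the list of blades directed to throttle this interval."""
    total = sum(readings.values())
    if total <= ENSEMBLE_BUDGET_W:
        return []                        # under budget: no directives
    directives = []
    excess = total - ENSEMBLE_BUDGET_W
    # Heuristic: throttle the highest consumers first; a real policy
    # would weigh per-workload SLAs when choosing victims.
    for blade, power in sorted(readings.items(), key=lambda kv: -kv[1]):
        if excess <= 0:
            break
        directives.append(blade)
        excess -= power * 0.25           # assume throttling sheds ~25%
    return directives

print(control_step({"b1": 120.0, "b2": 110.0, "b3": 90.0}))  # ['b1']
```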
Benefits: This approach enables us to provision the
power budget of the ensemble to a value much lower
than the sum of the worst-case power for each of the
individual servers. This allows significant reductions
in the requirements for power delivery, power
consumption, and heat extraction in the system. This,
in turn, can lead to designs that use power supplies of
lower ratings (lower costs and better efficiencies),
consume less electricity (lower costs and better
environmental friendliness), and require reduced
investment in cooling equipment like fans and air-
conditioning (lower costs).
[Figure 3 diagram: each individual system (blade) hosts a
management agent with monitoring hooks and power control hooks;
the ensemble controller measures, monitors, and predicts usage
and applies policy-driven control against the power budget and
SLAs.]
Figure 3: Ensemble-level power management. The key is
to consider power budget management across a broader
collection of systems.

As an illustrative example, let us consider the cooling
requirements for a 500W blade enclosure with each

blade rated at 20W. On current systems, this requires
provisioning each blade with support for heat
dissipation up to 20W (heat sinks, etc.), support at
the blade enclosure level for heat dissipation of 500W
(fans, etc.), and support at the data center level for heat
dissipation of 10KW per rack (air conditioning, etc.).
In contrast, consider the scenario where we implement
ensemble-level power management at the level of the
blade enclosure or chassis, and the power is set based
on the peak of the cumulative resource utilization. As
seen in Figure 2, this value is typically 25-90% lower
than the worst-case provisioning. The enclosure
power and cooling budget can be reduced significantly
without affecting any properties of the solution.
A more aggressive approach would set the power
budget to an even smaller value, say, the 90th
percentile of the cumulative power usage. Given the
bursty nature of the usage, this can achieve even more
savings (Figure 2 shows that this is an additional 25-
60% lower), but at the expense of scenarios where
performance needs to be throttled to bring the system
within budget. The workload slowdown from such
throttling can be minimized by intelligently selecting
the blades to throttle. Additionally, one can judiciously
allow some spikes over budget as long as the heat can
be dissipated before redlining. This is further
discussed in Section 4.
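Whether a given over-budget spike is tolerable can be sketched with a first-order thermal model (the parameters below are illustrative assumptions, not measured blade characteristics): temperature approaches the steady state set by power and thermal resistance exponentially, so a short spike may stay under the redline even when its steady state would not.

```python
import math

AMBIENT_C = 25.0   # assumed ambient temperature
REDLINE_C = 70.0   # assumed critical (redline) threshold
R_THERMAL = 2.0    # assumed thermal resistance, degC per watt
TAU_S = 30.0       # assumed thermal time constant, seconds

def temp_after_spike(power_w, duration_s, start_temp_c=AMBIENT_C):
    """Temperature after sustaining power_w for duration_s seconds,
    using a first-order exponential (RC) thermal model."""
    steady = AMBIENT_C + R_THERMAL * power_w
    return steady + (start_temp_c - steady) * math.exp(-duration_s / TAU_S)

def spike_is_safe(power_w, duration_s):
    """True if the spike can be dissipated before redlining."""
    return temp_after_spike(power_w, duration_s) < REDLINE_C

# A brief 30W spike is fine even though 30W sustained would redline
# (its steady state is 25 + 2*30 = 85 degC, above the 70 degC threshold).
print(spike_is_safe(30.0, 5.0), spike_is_safe(30.0, 300.0))  # True False
```

This also illustrates why the power delivery case is more constrained than the cooling case: a fuse offers no such integration window, so transient spikes over the delivery budget must be prevented outright.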
Reduced enclosure power budgets also translate to
reduced rack-level cooling requirements in the data
center. The per-server cooling can also be reduced as
long as it is adequate to dissipate the heat under the
transient bursts the server encounters for the specific
policy adopted by the enclosure controller.
A similar discussion is applicable to the power
delivery to the system. However, the policies are more
constrained relative to the cooling case since transient
spikes over budgeted power will trip the fuse and need
to be prevented.
3.2 Enclosure-level Implementation
Below, we discuss the implementation of our approach
for a blade server enclosure. Figure 4 summarizes the
various elements of the architecture that we consider.
The enclosure is a rack-mountable chassis and
contains 20 blade bays, two gigabit Ethernet switches,
and an embedded enclosure management controller
called the Integrated Administrator Module (IAM).
The individual blades include the processor, chipset,
memory, hard drive, and network interfaces. In
addition, each blade also includes an ASIC that
functions as a blade management controller and
manages and controls the hardware, responds to
events, and communicates with the enclosure
manager. The changes for implementing our solution
consist of a few relatively straightforward hardware
additions at the blade level and changes to the
firmware at both the blade and enclosure levels.
Below, we discuss the key implementation issues in
more detail.
Choosing and enforcing the power budget: As
discussed earlier, the sum of the worst-case power
[Figure 4 diagram: the enclosure contains power supplies,
cooling, two Gigabit Ethernet switches, and the enclosure
controller (IAM) running enclosure firmware (resource monitoring
and prediction, policy-driven throttling directives, and
initialization and heart-beat checks), connected to the blades
over an I2C bus. Each blade contains the CPU, RAM, ROM,
southbridge controller, hard drive, NICs, USB/graphics, PCI, and
ATA/IDE interfaces, plus a blade management controller with an
SMBus-attached sensor, power monitor, thermal monitors and
diodes, and a hot-swap controller; the blade firmware handles
data gathering and reporting, power (request) control, and
initialization and heart-beat checks.]
Figure 4: Implementation of enclosure-level power management.

References (partial list, as indexed)
- Managing energy and server resources in hosting centers
- Web search for a planet: The Google cluster architecture
- Dynamic thermal management for high-performance microprocessors
- A workload characterization study of the 1998 World Cup Web site
- Making scheduling cool: temperature-aware workload placement in data centers