
Performance of a mixed mode air handling unit for direct liquid-cooled servers

TL;DR: In this article, an experimental setup was designed and constructed which comprises a direct liquid-cooled server, rack-level cooling and a compressor-free external cooling system, and the power usage effectiveness and the air handling unit (AHU) performance were investigated under different datacenter operation scenarios and AHU configurations.
Abstract: Datacenter energy consumption constitutes a large portion of global energy consumption. In particular, a large amount of this energy is consumed by the datacenter cooling system. Consequently, many innovative cooling technologies have been developed to reduce energy consumption and increase cooling performance. In this work, an experimental setup was designed and constructed which comprises a direct liquid-cooled server, rack-level cooling and a compressor-free external cooling system. This study tracks the heat generated from IT processes to the environment. In addition, the power usage effectiveness (PUE) and the air handling unit (AHU) performance are investigated. The objectives were studied under different datacenter operation scenarios and AHU configurations.

Summary (2 min read)

1. Introduction

  • The increasing requirements of datacentre applications are driving the demand for high performance IT infrastructure, which in turn is increasing the heat dissipation and cooling load from IT units, leading to greater power consumption to maintain the datacentres in a safe operational condition.
  • The Power Usage Effectiveness (PUE) metric is widely used in datacentre energy-efficiency assessment.
  • The limitations of air cooling methods are leading to increased uptake of liquid cooling methods which are closer to the heat sources [3, 4, 7, 11-13].
  • This technique, which is based on a water/air heat exchanger, significantly improves the cooling efficiency of datacentres by eliminating hot-spot problems and reducing the need for CRAC units [14, 16].
  • It demonstrates the compromise that must be struck between increasing the IT efficiency and energy consumption.

2. Experimental Set-up

  • The experimental setup can be divided into two parts as shown in figure 1: IT environment side and outdoor heat rejection system side.
  • Thirty Sun Fire V20z servers from circa 2005 are used to represent the IT environment.
  • The cooling loops are joined through one passage to the secondary loop of a coolant heat exchanger (CHx).
  • In the processing water loop, hot water from the CHx enters the AHU, where it rejects heat before being pumped back to the CHx.
  • Six spray nozzles are located upstream of the heat exchanger in a rack arrangement.

3. Methodology

  • To evaluate the overall operational performance of the system, many temperature, flow rate, pressure drop, humidity, and power sensors were installed.
  • All of these sensors are logged and can be accessed via a central programmed panel.
  • The IT load is generated using stress under Linux.
  • In addition, the response of the AHU operation is monitored regarding the power consumption and cooling efficiency.
  • The effectiveness is calculated following the number of transfer units (NTU) approach, and is defined as the ratio of the heat rejected (Q) to the maximum possible heat transfer (Qmax) on either side of the heat exchanger [20].

4. Results

  • Experiments are conducted with different datacentre operational load scenarios and different AHU configurations.
  • It is also found that the effectiveness of the AHU heat exchanger is about 0.66 over the test period, as shown in figure 7.
  • The power consumed by the pump is almost constant over the experimental period, as the flow rate is kept constant at about 0.37 l/s.
  • The AHU fan is the major part of the power consumption in the cooling system.
  • One of the load scenarios included idle operation in the 3rd hour and 25% utilization in the 4th hour.

5. Conclusion

  • This work highlights experimental results from a compressor-free, DCLC-cooled datacentre test facility.
  • The heat generated in the IT equipment was tracked from the sources to the heat rejection system.
  • The response of the AHU to the stresses was investigated.
  • It is shown that the power consumed by the AHU’s fan is the highest portion of the total power supplied to the AHU unit.
  • Thus, utilizing evaporative cooling by spraying the heat exchangers has been found to reduce the PUE of the datacentre and increase the cooling performance of the AHU.


This is a repository copy of "Performance of a mixed mode air handling unit for direct liquid-cooled servers".
White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/118427/
Version: Accepted Version
Proceedings Paper: Kadhim, MA, Al-Anii, YT (orcid.org/0000-0002-4382-1998), Kapur, N (orcid.org/0000-0003-1041-8390) et al. (2 more authors) (2017) Performance of a mixed mode air handling unit for direct liquid-cooled servers. In: 2017 33rd Thermal Measurement, Modeling & Management Symposium (SEMI-THERM), 13-17 Mar 2017, San Jose, CA, USA. IEEE, pp. 172-178. ISBN 9781538615317. https://doi.org/10.1109/SEMI-THERM.2017.7896926

33rd SEMI-THERM Symposium

Performance of a Mixed Mode Air Handling Unit for Direct Liquid Cooled Servers

Mustafa A. Kadhim* (1,3), Yaser T. Al-Anii (2,3), Nikil Kapur (3), Jonathan L. Summers (3), and Harvey M. Thompson (3)
(1) Mechanical Engineering Department, University of Babylon, IQ
(2) Mechanical Engineering Department, University of Anbar, IQ
(3) Institute of Thermofluids, School of Mechanical Engineering, University of Leeds, Woodhouse Lane, Leeds LS2 9JT, United Kingdom
Email: ml13mak@leeds.ac.uk
Abstract
Datacentre energy consumption constitutes a large portion of global power consumption. In particular, a large amount of this power is consumed by the datacentre cooling system. Consequently, many innovative cooling technologies have been developed to reduce energy consumption and increase cooling performance. In this work, an experimental setup was designed and constructed which comprises a direct liquid-cooled server, rack-level cooling and a compressor-free external cooling system. The study tracks the heat generated from IT processes to the environment. In addition, the power usage effectiveness (PUE) and the air handling unit (AHU) performance were investigated. These objectives were studied under different datacentre operation scenarios and AHU configurations.
Keywords
Datacentre, evaporative cooling, compressor-free cooling unit design, direct liquid-cooled servers.
1. Introduction
The increasing requirements of datacentre applications are
driving the demand for high performance IT infrastructure,
which in turn is increasing the heat dissipation and cooling load
from IT units, leading to greater power consumption to
maintain the datacentres in a safe operational condition. As a
result, the need for more effective, economic, environmental
and efficient cooling techniques has become critical.
The Power Usage Effectiveness (PUE) metric is widely used in datacentre energy-efficiency assessment. It is defined as the ratio of the total energy consumed by a datacentre to the energy consumed by the IT equipment. Although the PUE is a useful metric for reflecting energy consumption, it represents effectiveness rather than efficiency, and many datacentres use their own calculation of PUE for marketing purposes [1]. The datacentre cooling power consumption increases dramatically with higher PUE and grows further as more servers are deployed [2]. Consequently, finding more efficient cooling techniques is urgent in order to reduce the PUE.
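Written out, the definition above is

\mathrm{PUE} = \frac{E_{total}}{E_{IT}} = \frac{E_{IT} + E_{cooling} + E_{other}}{E_{IT}}

so, for example, a PUE of about 2 (quoted below for conventional air-cooled facilities) means that cooling and other overheads consume roughly as much power again as the IT equipment itself.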
The traditional approach to cooling datacentres is to circulate cold air over the IT equipment. The heat dissipated by the servers is initially transferred to the passing cold air and then extracted by the computer room air conditioning (CRAC) units. This heat is then exchanged with a refrigeration chiller plant, whose refrigerant is condensed using a cooling tower [3-5]. Such a cooling configuration was found to consume 50% of the required IT power, with a large portion of the energy consumed by the CRAC units and the chiller plant [5-7]. The major drawbacks of this technique are the high PUE (PUE ~ 2), air contamination of the electronics, and the rack arrangement required to achieve efficient cooling [2, 8-10]. Consequently, the limitations of air cooling methods are leading to increased uptake of liquid cooling methods, which bring the cooling closer to the heat sources [3, 4, 7, 11-13].
Recently, two promising approaches have been developed to improve cooling and energy efficiency: hybrid cooling and fully liquid cooling. Hybrid liquid cooling uses a heat exchanger either at the back of the server rack (the rear-door heat exchanger strategy) or at the top of the server rack above the cold aisle (the overhead heat exchanger strategy) [14, 15]. This technique, which is based on a water/air heat exchanger, has been found to significantly improve the cooling efficiency of datacentres by eliminating hot-spot problems and reducing the need for CRAC units [14, 16].
Direct liquid cooling brings the heat transfer liquid into direct contact with the heat-generating source (i.e. the CPUs). This method has demonstrated very high energy savings and cooling efficiency compared with air-cooled servers, as the convective heat transfer with direct liquid cooling is much higher, leading to lower hot-spot temperatures and better transfer of heat out of the IT environment [10, 17]. It has also been found that the power required to transfer the heat to the environment is 45% less for a direct liquid-cooled system than for an air-cooled system [18].
This paper considers the rejection of heat from direct liquid-cooled servers through the use of an air handling unit that can operate wet or dry. It demonstrates the compromise that must be struck between increasing the IT efficiency and the energy consumption.
2. Experimental Set-up
The experimental setup can be divided into two parts as
shown in figure 1: IT environment side and outdoor heat
rejection system side.
2.1 IT environment side:
Thirty Sun Fire V20z servers from circa 2005 are used to represent the IT. Each server has 2 x AMD Opteron 64-bit processors running Debian/Ubuntu Linux. Traditionally, these servers were designed to be air cooled, as shown in figure 2a. However, in the present configuration, all heat sinks and fans are replaced by direct-contact liquid cooling technology manufactured by CoolIT, as shown in figure 2b [19]. The thirty servers are fitted in a single rack, as shown in figure 3. The cooling loops are
joined through one passage making their way to the secondary
loop of a coolant heat exchanger (CHx).
The CHx is also provided by CoolIT and consists of a plate-type heat exchanger, pumps, valves, fittings and sensors, as shown in figure 4 [19]. There are two pumps connected in series on the secondary loop, programmed to operate at different speeds according to the temperatures and load.
2.2 Outdoor heat rejection system side:
The primary loop of the CHx is connected to an air handling
unit (AHU). The AHU is designed to utilize spray evaporative
cooling to boost heat exchanger capacity. It consists of three
fluid loops as shown in figure 1: a processing water loop, an air
side loop and a spray water loop.
1. Processing water loop: the hot water coming from the CHx enters the AHU, where it rejects heat before being pumped back to the CHx. A filter and a pressure vessel are connected to the loop to prevent contamination and to regulate the loop pressure, respectively. A bypass loop is used to regulate the process water flow rate and the temperature entering the AHU’s heat exchanger.
2. Airside loop: the rejected heat is carried by the passing air through a tunnel to the external environment. The AHU was designed as an open-circuit wind tunnel of 1103 x 1197 cross-section and consists of:
   • Inlet air section
   • Heat exchanger unit
   • Visualization section: perspex panels are used to view into the heat exchanger section
   • Spray water drain: used to collect the over-sprayed water so that it can be quantified and the evaporation rate estimated
   • Axial fan: provides variable air suction pressure
3. Spray water loop: six spray nozzles are located upstream of the heat exchanger in a rack arrangement. These nozzles atomize the water to droplet diameters in the region of 15-30 microns. The spray water flow rate is measured to quantify the performance. The spray is activated under certain conditions of weather and datacentre operational load.
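The drain arrangement above implies a simple water mass balance for the sprayed heat exchanger; written explicitly (this relation is implied by the description rather than stated as an equation in the paper),

\dot{m}_{evap} = \dot{m}_{spray} - \dot{m}_{drain}

i.e. the evaporation rate is estimated as the difference between the measured spray flow rate and the collected over-spray.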
Figure 1 Datacentre design layout

Figure 2 Sun Fire V20z servers. a) Air-cooled server. b) DCLC-cooled server
Figure 3 Rack of Sun Fire servers representing the datacentre
Figure 4 CHx from CoolIT

3. Methodology
To evaluate the overall operational performance of the system, many temperature, flow rate, pressure drop, humidity, and power sensors were installed.
The servers contain temperature sensors to measure the CPU die temperatures, the RAM temperatures and the environment temperature inside each server. The power consumption of selected servers was also measured using a logged watt meter. The power consumption and flow rate of the fans used for cooling the RAM and the power supply were investigated separately, and a fan characteristic curve was obtained.
The CHx contains logged inlet and outlet temperature sensors on both the primary and secondary loops. It also contains a power meter and a flow meter. All of these sensors are logged and can be accessed remotely to download the data.
The AHU contains temperature, flow rate, pressure and humidity sensors, as well as a power meter (as shown in figure 1). All of these sensors are logged and can be accessed via a central programmed panel.
The IT load is generated using the stress utility under Linux. Various IT loads are created by scripts to simulate real datacentre operation over certain periods. In addition, the response of the AHU is monitored in terms of power consumption and cooling efficiency. Furthermore, the capability of the AHU is explored for various air and spray flow rates.
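The paper does not reproduce its load-generation scripts. Purely as an illustration of how the idle / 100%-stress / idle scenario described in the results could be orchestrated with the stress tool, a sketch might look like the following; the hostnames, core count and SSH access are assumptions, not the authors' actual setup:

```python
#!/usr/bin/env python3
"""Illustrative sketch only: a load scenario driven by the standard Linux
'stress' tool. Hostnames, SSH access, core counts and phase timings are
hypothetical assumptions, not the authors' actual scripts."""
import subprocess
import time

SERVERS = [f"server{i:02d}" for i in range(1, 31)]  # 30 hosts (hypothetical names)
CORES_PER_SERVER = 4                                # assumed per-server core count

def run_phase(utilization: float, duration_s: int) -> None:
    """Stress every server for duration_s seconds at roughly the given utilization.

    A utilization of 0.0 is an idle phase (no stress processes are started)."""
    busy_cores = round(CORES_PER_SERVER * utilization)
    if busy_cores > 0:
        for host in SERVERS:
            # 'stress' terminates itself after --timeout, so fire and forget.
            subprocess.Popen(["ssh", host, "stress",
                              "--cpu", str(busy_cores),
                              "--timeout", f"{duration_s}s"])
    time.sleep(duration_s)  # wait out the phase before moving on

if __name__ == "__main__":
    run_phase(0.0, 10 * 60)  # 10 minutes idle
    run_phase(1.0, 60 * 60)  # 1 hour at 100% utilization
    run_phase(0.0, 60 * 60)  # 1 hour idle
```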
The effectiveness of the heat exchangers can be obtained by averaging the recorded values. The effectiveness is calculated following the number of transfer units (NTU) approach, in which the effectiveness is defined as the ratio of the heat rejected (Q) to the maximum possible heat transfer rate (Qmax) on either side of the heat exchanger [20]:

\varepsilon = Q / Q_{max}    (1)

where Qmax is calculated from

Q_{max} = C_{min} (T_{h,in} - T_{c,in})    (2)

and Cmin is the minimum heat capacity rate when comparing the two fluids in the heat exchanger:

C_{min} = \min(\dot{m}_h c_{p,h}, \dot{m}_c c_{p,c})    (3)
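As a numerical illustration of equations (1)-(3) only, and not the authors' data-processing code, the effectiveness of the AHU water/air heat exchanger could be computed from logged flow rates and temperatures along these lines (the operating point is hypothetical):

```python
"""Minimal numerical sketch of equations (1)-(3); the sample operating
point below is hypothetical, not measured data from the paper."""

CP_WATER = 4180.0  # specific heat of water, J/(kg K)
CP_AIR = 1005.0    # specific heat of air, J/(kg K)

def effectiveness(m_dot_h, cp_h, t_h_in, t_h_out, m_dot_c, cp_c, t_c_in):
    """Return epsilon = Q / Qmax for a heat exchanger, per equations (1)-(3)."""
    q = m_dot_h * cp_h * (t_h_in - t_h_out)      # heat rejected by the hot stream
    c_min = min(m_dot_h * cp_h, m_dot_c * cp_c)  # minimum heat capacity rate, eq. (3)
    q_max = c_min * (t_h_in - t_c_in)            # maximum possible heat transfer, eq. (2)
    return q / q_max                             # effectiveness, eq. (1)

if __name__ == "__main__":
    # Hypothetical AHU operating point: 0.37 kg/s of process water cooled from
    # 35.0 to 23.8 degC by 2.0 kg/s of air entering at 18.0 degC.
    eps = effectiveness(0.37, CP_WATER, 35.0, 23.8, 2.0, CP_AIR, 18.0)
    print(f"AHU heat exchanger effectiveness ~ {eps:.2f}")  # ~0.66 at this point
```

The same function would apply to the CHx by substituting the primary- and secondary-loop water flows and temperatures.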
4. Results
Experiments are conducted with different datacentre operational load scenarios and different AHU configurations. The key results of interest are the heat transfer calculations from the CPUs to the heat rejection system. In particular, the response of the AHU to IT load perturbations is highlighted. Figure 5 shows a sample of the heat rejection in the secondary and primary loops of the CHx, the heat arriving at the heat exchanger of the AHU, and the heat carried by the air to the environment. The IT produces a total heat load (Qs) which is supplied to the secondary loop of the CHx. The CHx heat exchanger carries the heat to the primary loop (Qp), from which it is transferred to the AHU heat exchanger by water (Qw) and finally carried away by the passing air (Qa) to the external environment. These results were taken with the AHU in auto mode and the servers operating idle for 10 minutes, stressed at 100% utilization for 1 hour, and then left idle for 1 hour. It can be noted that the heat generated by the servers increases almost instantaneously as the servers are stressed, then settles at a value of about 4.25 kW with some fluctuations. It can also be seen that the heat generated drops dramatically as soon as the stress finishes. The primary loop follows the stress scenario at a similarly fast pace, but with lower heat than the secondary loop. This behaviour can be attributed to the effectiveness of the CHx heat exchanger, which is found to be about 0.6 over the whole test period, as shown in figure 6. On the other hand, the heat arriving at the AHU shows a noticeable delay compared with the server stress scenario. It can be seen from figure 5 that the AHU needs more than 30 minutes to dissipate the heat after the end of the stress. It is also found that the effectiveness of the AHU heat exchanger is about 0.66 over the test period, as shown in figure 7.
This time delay between the heat rejection system and the IT equipment varies depending on the cooling system, the distance between the datacentre and the heat rejection system, and the effectiveness of the heat exchangers in the loop. It is also necessary to consider this delay in the design of the cooling system, as it increases the hot-spot temperatures in the CPUs, especially under oscillating loads on datacentres.
Figure 5 Heat transferred from the CHx to the AHU
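Each of the heat rates Qs, Qp, Qw and Qa plotted in figure 5 is presumably obtained calorimetrically from the logged flow rate and inlet/outlet temperature difference of the corresponding loop; the standard relation (an assumption here, not a formula quoted from the paper) is

Q = \dot{m}\, c_p\, (T_{out} - T_{in})

with \dot{m} the loop mass flow rate and c_p the specific heat of the water (or of the air in the case of Qa).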
The results also explore the variation of the PUE for the system, based on the measured power consumption of the IT and the total power consumed for cooling. This cooling power consumption is divided into three different heat transfer parts. Firstly, the server cooling level includes two main categories, the CPUs and the RAM. The current configuration of the servers only allows the CPUs to be cooled by the direct contact liquid, while the RAM is cooled by air which is pressurized by the server fans. The power consumption of the fans in each server is found to be between 3.44 and 6.66 W, depending on the temperature of the RAM. This is calculated using the fan speed-power curves for each operational scenario.
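As a rough rack-level estimate only (assuming all thirty servers happen to sit at these per-server extremes simultaneously), the RAM cooling fans therefore draw about

30 \times 3.44\ \mathrm{W} \approx 103\ \mathrm{W} \quad \text{to} \quad 30 \times 6.66\ \mathrm{W} \approx 200\ \mathrm{W}

across the rack, a small fraction of the roughly 4.25 kW of IT heat observed under full stress.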

Citations
Proceedings ArticleDOI
25 Mar 2019
TL;DR: This work proposes a thermal-aware workload mapping strategy considering the potential and limitations of a two-phase thermosyphon to further minimize hot spots and spatial thermal gradients and proposes a design and mapping strategy to decrease the chiller cooling power at least by 45%.
Abstract: The power density and, consequently, power hungriness of server processors is growing by the day. Traditional air cooling systems fail to cope with such high heat densities, whereas single-phase liquid-cooling still requires high mass flow-rate, high pumping power, and large facility size. On the contrary, in a micro-scale gravity-driven thermosyphon attached on top of a processor, the refrigerant, absorbing the heat, turns into a two-phase mixture. The vapor-liquid mixture exchanges heat with a coolant at the condenser side, turns back to liquid state, and descends thanks to gravity, eliminating the need for pumping power. However, similar to other cooling technologies, thermosyphon efficiency can considerably vary with respect to workload performance requirements and thermal profile, in addition to the platform features, such as packaging and die floorplan. In this work, we first address the workload- and platform-aware design of a two-phase thermosyphon. Then, we propose a thermal-aware workload mapping strategy considering the potential and limitations of a two-phase thermosyphon to further minimize hot spots and spatial thermal gradients. Our experiments, performed on an 8-core Intel Xeon E5 CPU reveal, on average, up to 10°C reduction in thermal hot spots, and 45% reduction in the maximum spatial thermal gradient on the die. Moreover, our design and mapping strategy are able to decrease the chiller cooling power at least by 45%.

4 citations


Cites methods from "Performance of a mixed mode air han..."

  • ...In particular, the framework developed by [6] shows that Direct Contact Liquid Cooling (DCLC) systems can reduce the PUE down to 1....


Journal ArticleDOI
TL;DR: The flow distribution and central processing unit (CPU) temperatures inside a rack of thirty 1'U (single rack unit) Sun Fire V20z servers retrofitted with direct-to-chip liquid cooling and two cool...
Abstract: The flow distribution and central processing unit (CPU) temperatures inside a rack of thirty 1 U (single rack unit) Sun Fire V20z servers retrofitted with direct-to-chip liquid cooling and two cool...

4 citations

Proceedings ArticleDOI
03 Sep 2021
TL;DR: In this paper, the authors used supervised machine learning with linear regression and logistical regression approaches to investigate which methodology produces the best prediction results for adjusting the AHU unit fan speed for better control of the supplied data center temperature.
Abstract: An Increase in Data Center power requirements has placed significant pressure on traditional Data Center cooling management Systems. The temperature in the Data Center is controlled using Air handling units (AHUs) and plays a critical role in a Data Center to maintain the required temperature to ensure the best possible performance. As the targeted Data Center is quite Old and using backdated technologies and does not have sensor-based technologies implemented. One of the issues faced by the target Data Center was that AHU fan speed was set to the static setting which impacts the Supplied Temperature in Data Center and results in excessive hot & cold temperature inside a Data Center. The proposed model resolves the problem faced by the targeted Data Center to operate the AHU fan speed to maintain the required DC Temperature on the predicted range by using machine learning techniques. This model not only solves the problem of maintaining the necessary temperature in the Data Center, but it can also regulate the fan speed within the expected range, contributing to the Data Center's energy efficiency. Supervised machine learning with linear regression and logistical regression approaches are utilized to investigate which methodology produces the best prediction results for adjusting the AHU unit fan speed for better control of the supplied Data Center temperature. In the targeted Data Center, it has no scope to expand more rack space or host IT load. It is desired that the predicted or recommended range for controlling AHU fan speed be determined so that the needed temperature can be sustained with the suggested setting without requiring extensive manual task. Henceforth as the data generated by the Data Center is historical, supervised regression machine learning models using Linear and Logistic Regression techniques are used. Both regression models are compared to see which regression methodology predicts the best variable fan speed range for maintaining the data center's required temperature.

1 citation

References
Book
01 Jan 2003
TL;DR: In this paper, the authors introduce basic concepts of heat transfer, including thermal spreading and contact resistances, and forced convection and external flow. But they do not consider the effect of external flow on internal flow.
Abstract: Preface. Contributors. 1. Basic Concepts (Allan D. Kraus). 2. Thermophysical Properties of Fluids and Materials (R. T Jacobsen, E. W. Lemmon, S. G. Penoncello, Z. Shan, and N. T. Wright). 3. Conduction Heat Transfer (A. Aziz). 4. Thermal Spreading and Contact Resistances (M. M. Yovanovich and E. E. Marotta). 5. Forced Convection: Internal Flows (Adrian Bejan). 6. Forced Convection: External Flows (Yogendra Joshi and Wataru Nakayama). 7. Natural Convection (Yogesh Jaluria). 8. Thermal Radiation (Michael F. Modest). 9. Boiling (John R. Thome). 10. Condensation (M. A. Kedzierski, J. C. Chato, and T. J. Rabas). 11. Heat Exchangers (Allan D. Kraus). 12. Experimental Methods (Jose L. Lage). 13. Heat Transfer in Electronic Equipment (Avram Bar-Cohen, Abhay A. Watwe, and Ravi S. Prasher). 14. Heat Transfer Enhancement (R. M. Manglik). 15. Porous Media (Adrian Bejan). 16. Heat Pipes (Jay M. Ochterbeck). 17. Heat Transfer in Manufacturing and Materials Processing (Richard N. Smith, C. Haris Doumanidis, and Ranga Pitchumani). 18. Microscale Heat Transfer (Andrew N. Smith and Pamela M. Norris). 19. Direct Contact Heat Transfer (Robert F. Boehm). Author Index. Subject Index. About the CD-ROM.

1,368 citations


"Performance of a mixed mode air han..." refers background in this paper

  • ...defined as the ratio between the heat rejected (Q) to the maximum heat that can be transferred (Qmax) in either side of the heat exchanger [20]....


01 Jan 2006
TL;DR: In this article, the authors benchmarked 22 data center buildings and found that data centers can be over 40 times more energy intensive than conventional office buildings, and proposed a set of best-practice technologies for energy efficiency.
Abstract: Over the past few years, the authors benchmarked 22 data center buildings. From this effort, we have determined that data centers can be over 40 times as energy intensive as conventional office buildings. Studying the more efficient of these facilities enabled us to compile a set of “best-practice” technologies for energy efficiency. These best practices include: improved air management, emphasizing control and isolation of hot and cold air streams; rightsizing central plants and ventilation systems to operate efficiently both at inception and as the data center load increases over time; optimized central chiller plants, designed and controlled to maximize overall cooling plant efficiency, central air-handling units, in lieu of distributed units; “free cooling” from either air-side or water-side economizers; alternative humidity control, including elimination of control conflicts and the use of direct evaporative cooling; improved uninterruptible power supplies; high-efficiency computer power supplies; on-site generation combined with special chillers for cooling using the waste heat; direct liquid cooling of racks or computers; and lowering the standby losses of standby generation systems. Other benchmarking findings include power densities from 5 to nearly 100 Watts per square foot; though lower than originally predicted, these densities are growing. A 5:1 variation in cooling effectiveness index (ratio of cooling power to computer power) was found, as well as large variations in power distribution efficiency and overall center performance (ratio of computer power to total building power). These observed variations indicate the potential of energy savings achievable through the implementation of best practices in the design and operation of data centers.

251 citations

Journal ArticleDOI
TL;DR: In this paper, the authors compared the heat transfer characteristics of several cooling technologies with potential application in the server electronics industry and concluded that some form of liquid cooling is necessary in high performance computing applications.

164 citations


"Performance of a mixed mode air han..." refers background in this paper

  • ...However, the limitations of air cooling methods are leading to an increased uptake of liquid cooling methods, which brings compact and effective heat transfer loops closer to the heat sources [3, 4, 7, 11-13]....


  • ...Such a cooling configuration in datacenters was discovered to be consuming 50% of the required IT power consumption with a high energy portion consumed by the CRAC unit and chiller plant [5-7]....


Journal ArticleDOI
TL;DR: In this paper, a new Computational Fluid Dynamics (CFD) strategy is developed for data center scenarios where a liquid loop heat exchanger is attached at the rear of server racks (back doors), which can avoid the need to separate the cold and hot air streams in traditional hot/cold aisle arrangements.

147 citations


"Performance of a mixed mode air han..." refers background in this paper

  • ...However, the limitations of air cooling methods are leading to an increased uptake of liquid cooling methods, which brings compact and effective heat transfer loops closer to the heat sources [3, 4, 7, 11-13]....


Journal ArticleDOI
TL;DR: A critical discussion on existing and emerging technologies for data center cooling systems was carried out and a critical analysis on next future technology solutions for obtaining high energy efficiency data center is performed.

145 citations

Frequently Asked Questions (1)
Q1. What contributions have the authors mentioned in the paper "Performance of a mixed mode air handling unit for direct liquid cooled servers" ?

In this work, an experimental setup was designed and constructed which comprises a direct liquid-cooled server, rack-level cooling and a compressor-free external cooling system. The study tracks the heat generated from IT processes to the environment. These objectives were studied under different datacentre operation scenarios and AHU configurations.