Recent advancements on thermal management and evaluation for data centers

Applied Thermal Engineering 142 (2018) 215–231


Kai Zhang*, Yiwen Zhang, Jinxiang Liu, Xiaofeng Niu
College of Urban Construction, Nanjing Tech University, Nanjing 210009, China

HIGHLIGHTS

• Recent advancements on thermal management and evaluation for data centers are reviewed.
• Effects of thermal distribution in the underfloor plenum and room space are summarized.
• Advances in the technologies of liquid cooling and air cooling are analyzed.
• Energy conservation solutions, especially free cooling and heat recovery, are discussed.
• Evaluation metrics concerning equipment safety and energy saving are introduced.

ARTICLE INFO

ABSTRACT

Keywords: Data center; Thermal management; Energy conservation; Evaluation metrics

As the hub of data storage, network communication, and computation, the data center has become inseparable from the development of every social sector. With the increasing demand for data communication and computation, the size and number of data centers are expanding dramatically. As a critical consequence of the continuous deployment of data centers, their tremendous energy consumption has become a worldwide problem. Compared with office facilities, the energy demand per square meter of data centers has increased by roughly 100 times in recent years. To improve the energy efficiency of data centers, various design and operation methods have been proposed. However, systematic studies on the thermal management and evaluation of data centers and their cooling systems remain scarce. This paper presents a state-of-the-art review of research and development in this field, covering thermal management techniques and the associated evaluation metrics. In addition, energy conservation strategies and optimization methods that account for the safe operation of data centers are discussed in detail. Finally, guidelines on research and strategies for thermal management and evaluation in data centers are provided.

1. Introduction

With the arrival of "Big Data" and the "Internet of Things", data centers equipped with various kinds of computers and servers have gradually penetrated every social sector [1–4]. As the hub of data storage, network communication, and computation, the size and number of data centers are expanding dramatically around the world [5–7], especially with the deployment of high-density servers, which leads to tremendous growth in energy consumption. According to [8], the energy demand per square meter of a high-density data center has increased by 100 times compared with that of office facilities in recent years. Thus, thermal management and evaluation are particularly important to ensure the safe operation of data centers [9–12], and they are directly affected by the energy conservation



Corresponding author. E-mail address: [email protected] (K. Zhang).

https://doi.org/10.1016/j.applthermaleng.2018.07.004 Received 18 April 2018; Received in revised form 23 June 2018; Accepted 1 July 2018 Available online 03 July 2018 1359-4311/ © 2018 Elsevier Ltd. All rights reserved.

strategies of the cooling system and the thermal environment in the data center [13–16]. Owing to the growing demands for internet access and data storage, the cooling load density in data centers also keeps growing. For a high-density data center with an electricity demand of 10–20 MW, the heat rejection of each server rack is typically 2–20 kW [17,18]. To ensure the safe operation of data centers, cooling systems face many challenges in thermal management and energy conservation. Although the current underfloor air distribution (UFAD) system can relieve server overheating by increasing the supplied cooling, the additional cooling wastes a large amount of energy and burdens the power system [19–21]. Furthermore, elevated temperatures in data centers are often caused by poor discharge of the recirculating hot air, which deteriorates the indoor thermal environment [22]. In addition,


Nomenclature

Es       energy consumed by the servers (J)
Et       total electrical energy consumed by the data center (J)
n        total number of rack intakes
Q        total heat dissipation from the racks in the data center (W)
QIT      information technology equipment power consumption (W)
Qtotal   total facility power consumption of data center (W)
δQ       enthalpy rise of the cold air before entering the racks (W)
Tmax-all maximum allowable supply air temperature (°C)
Tmax-rec maximum recommended supply air temperature (°C)
Tmin-all minimum allowable supply air temperature (°C)
Tmin-rec minimum recommended supply air temperature (°C)
Tret     return air temperature (°C)
Tsup     supply air temperature (°C)
Tx       mean temperature at each rack intake (°C)
ΔTequip  temperature increase across IT equipment (°C)
ΔTinlet  temperature difference between CRAC supply air and rack inlets (°C)
ΔTrack   temperature rise through the server racks (°C)

Abbreviations

BAL      balance ratio
BP       bypass ratio
CRAC     computer room air conditioning
EUE      energy usage effectiveness
NP       negative pressure ratio
PUE      power usage effectiveness
RCI      rack cooling indices
RHI      return heat index
RTI      return temperature index
SHI      supply heat index
UFAD     underfloor air distribution

the appropriate evaluation metrics and advanced control and predictive strategies are essential for achieving the two critical tasks in data centers (i.e., thermal management for equipment safety and energy conservation for sustainable development). Thus, a comprehensive study on the thermal management and evaluation of data centers and the associated cooling systems is of great significance for the development of data centers.

This paper summarizes the development of data centers and their cooling systems. In order to evaluate and improve the performance of data centers, a state-of-the-art review is conducted to present recent advancements in thermal management and evaluation methods. The study is presented in three sections. In the first section, recent research on thermal management strategies in data centers is introduced, including thermal enhancement in the underfloor plenum and the cold/hot aisles in the room space; advanced cooling solutions are also discussed. In the second section, energy conservation techniques for data centers are presented (e.g., free cooling and heat recovery), and existing control and prediction methods for improving energy efficiency are reviewed. In the third section, currently used thermal evaluation metrics for data centers are introduced, especially recently proposed metrics. This study aims to provide guidelines on the central issues of thermal management and on new strategies for advanced cooling, energy conservation, and evaluation in data centers, which will be helpful for designing data centers as well as for improving the cooling performance of their air conditioning systems.

2. Thermal management techniques

2.1. Airflow in the underfloor plenum

Thermal management is intended to achieve a satisfactory thermal environment or temperature distribution in the data center, which is affected by many factors [23,24]. To improve the cooling performance, the UFAD system is usually configured with cold and hot aisles in the data center [25–27]. As shown in Fig. 1, the cooled air is delivered through the underfloor plenum and the diffusers mounted on the raised floor to the cold aisle, and is then vented from the hot aisle after cooling the servers in the racks [28,29]. As the first flow channel, the underfloor plenum has a great effect on the temperature distribution in the room space [30,31]. Patankar [17] pointed out that a well-distributed airflow in the underfloor plenum could prevent the mixing of cold and hot air as well as hot spots in the room space. That study also indicated that the pressure field in the underfloor plenum was the primary influence on the flow field, and that it was mainly affected by the height of the raised floor, the open area of the perforated tiles, and the deployment of obstructions in the underfloor plenum. Another study on the pressure distribution in the underfloor plenum was conducted by Karki et al. [32] based on an idealized one-dimensional computational model. In their study, two dimensionless parameters (one related to the pressure variation in the plenum and the other to the frictional resistance) were proposed as the control parameters for the airflow distribution. With these dimensionless parameters, the proposed one-dimensional model achieved airflow distributions comparable to those of an existing three-dimensional model. Fulpagare et al. [34] analyzed the influence of obstructions in the underfloor plenum, including the feeder lines, the main supply lines, the drain lines, the cable trays, and the blower openings of the computer room air conditioning (CRAC) units. The results showed that the airflow rates through the underfloor plenum were decreased by 80% on account of the obstructions. Thus, the deployment of cable pipes in the underfloor plenum is critical to the thermal performance of raised-floor data centers. Their study also suggested that the optimum location for deploying the cable pipes is the zone between the CRAC and the hot aisles in the underfloor plenum. The uneven distribution of airflow is aggravated by a lower underfloor plenum and larger perforated tiles [17]. To mitigate this problem, a variable-opening perforated tile was proposed by Wang et al. [35] and employed in a modeled data center. Their simulation results indicated that the temperature distribution was significantly improved without additional energy consumption by employing the variable-opening perforated tiles, and the standard deviation of the flow rate across the perforated tiles could be reduced by 86.3% compared with the consistent-opening design.
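The flow-rate standard deviation that Wang et al. [35] use as their uniformity measure is straightforward to reproduce; the sketch below, with hypothetical per-tile flow rates, shows how such a uniformity check might be scripted (all values are illustrative, not from their study).

```python
import statistics

def tile_flow_uniformity(flow_rates):
    """Mean, standard deviation, and relative deviation of per-tile airflow.

    A lower relative standard deviation indicates a more uniform
    underfloor supply, which Section 2.1 links to fewer hot spots.
    """
    mean = statistics.mean(flow_rates)
    std = statistics.stdev(flow_rates)
    return mean, std, std / mean

# Hypothetical per-tile flow rates (m^3/h) before and after switching
# to variable-opening perforated tiles.
fixed_opening = [410, 520, 350, 610, 290, 480]
variable_opening = [455, 470, 440, 465, 450, 460]

for label, flows in [("fixed", fixed_opening), ("variable", variable_opening)]:
    mean, std, rel = tile_flow_uniformity(flows)
    print(f"{label:>8}: mean={mean:.0f} m3/h, std={std:.0f}, rel={rel:.1%}")
```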

Fig. 1. Sketch of hot aisle and cold aisle based raised floor data center [33].


2.2. Thermal environment in the room space

The primary purpose of thermal management for a data center is to keep the temperatures in the room space and racks below the upper limits required by the standards at the lowest energy consumption [36]. To achieve this objective, the influence of the airflow organization, including recirculation air mixing, bypass air mixing, and negative pressure, should be considered carefully during thermal design [17,37,38]. Separating the hot and cold air has been demonstrated by many researchers to be an effective way to avoid air mixing [12,39,40]. Thus, several kinds of air separation, including cold aisle containment, blocking gaps between the racks, above-floor partitions, drop ceiling vents/ducts, and combinations of these designs, have been applied in data centers [17,41]. Fakhim et al. [41] investigated the effects of several candidate improvements to the cooling system of an operational data center with computational fluid dynamics (CFD) simulations, including cold-aisle containment, blocking empty rack spaces, additional ceiling vents, and ceiling vents with ducts. Their results showed that a combination of 100 cm ceiling return ducts and cold-aisle containment gave the best cooling performance among the proposed designs. Nada et al. [42,43] used a reduced-scale physical model with cold aisle containment to prevent air mixing near the racks. The cooling performance was analyzed for three deployments of cold aisles: free open cold aisles, semi-enclosed cold aisles, and fully enclosed cold aisles (Fig. 2). The results showed that fully enclosed cold aisles gave the best thermal performance, and the inlet temperature of the racks could be reduced by 13–15.5% depending on the power density. Although enclosed cold aisles can improve the cooling performance of a data center, they are not suitable for all data centers [44–46]. For data centers with an uneven distribution of racks and/or servers, the airflow organization becomes particularly important. As shown in Fig. 3, Wang et al. [35] modeled a drawer-type-rack data center with an enlarged hot aisle zone and a reduced cold aisle zone. Compared with a data center with conventionally configured cold and hot aisles, the inlet temperature of the racks was decreased by 13.3 °C, and the recirculation and bypass air mixing were also reduced significantly. He et al. [47] presented a temperature rise distribution (TRD) method to evaluate the air recirculation efficiency in data centers. Two dimensionless parameters, the self-to-other recirculation ratio (θsto,p) and the other-to-self recirculation ratio (θots,p), were proposed to characterize the air recirculation. The results showed that the maximum and average temperature rises in the racks could be decreased by sacrificing the heat recirculation rate.

The upgrading or expansion of computing equipment also has a great effect on the thermal environment in the room space of a data center on account of the varied distribution of the heating load [7,48]. To resolve this problem, increased heating loads are commonly deployed across multiple racks when the existing racks cannot handle them, a process called load spreading. However, the conventional approach to load spreading pays attention only to the power and heat capacity limits of each rack; the air distribution through the servers is not considered. Siriwardana et al. [49] proposed a novel technique for deploying the increased heating load based on a CFD heat-flow model with particle swarm optimization (PSO), in which the effects of the increased heating load on the temperature distribution are reduced as much as possible. The proposed model considered not only the increased heating load but also its impact on the airflow distribution. Experiments performed in an operational data center indicated that the PSO-based model achieved better thermal management than the conventional load spreading technique. Although the UFAD system is commonly employed in data centers, several alternative air distribution configurations have also been applied. Cho et al. [50] used CFD to simulate six different flooded (i.e., airflow without pipes or ducts) and locally ducted air distribution systems. The results showed that the best thermal environment was achieved by the combination of overhead locally ducted supply and locally ducted return among the six configurations. However, the air temperature at the bottom of the racks was about 3.9 °C higher than with UFAD systems, because the airflow of the overhead distribution system cannot reach the lower part of the server racks.

2.3. Optimization of the thermal environment

2.3.1. Improvement of the air cooling system structure

Several advanced air cooling solutions have been proposed to avoid local hot spots in server racks [51–53]. Kwon [54] investigated the adaptability of different ventilation methods with outside-air cooling systems in data centers and evaluated their energy efficiencies. The conclusions indicated that displacement ventilation could prevent local hot spots and temperature rises around the servers, keeping the mean temperature of the server racks below 35 °C. Non-uniform vertical airflow distribution also produces local hot spots [10,55]. To resolve this problem, a newly proposed cooling system called "in-row" cooling was integrated with the UFAD system by Priyadumkol et al. [56], as shown in Fig. 4. Their CFD analysis showed that the integrated system combined the advantages of the two systems, resulting in a better thermal distribution at both the top and the bottom of the racks compared with data centers using the UFAD system alone. A data center with fan-assisted perforated tiles (Fig. 5) was modeled with CFD by Song [57], and the maldistribution of vertical airflow in the server racks was investigated. Considering the effects of flow straightening and the fan distance from the perforated tile, a full factorial design method was also proposed to improve the

Fig. 2. Different deployments of cold aisles [42]: (a) free open configuration; (b) semi-enclosed; (c) full enclosed.


Fig. 3. Drawer-type rack based data center [35]: (a) traditional rack arrangement; (b) drawer-type rack arrangement; (c) schematic of the drawer design.

CFD model. The results showed that directing the fan-tile flow more vertically toward the server inlets and enlarging the distance between the fan unit heads and the perforations could reduce the uneven distribution of airflow from the bottom to the top of the server racks, and the cooling performance was thereby improved by the more uniform airflow distribution. Another optimization method to improve the vertical distribution of temperature and airflow was proposed by Zhang et al. [22]. In their study, a UFAD-based data center was equipped with a solar chimney (Fig. 6), and the performance of three solar chimney configurations was compared. The simulation results demonstrated that applying a solar chimney in a raised-floor data center was an effective way to achieve better rack cooling; in particular, the temperature in the upper zone of the cold aisle could be decreased by 13 °C when the solar chimney was configured above the cold aisle. Considering the uneven thermal distribution in the server racks, a new UFAD data center cooling system with inner-cooled racks was also proposed by Zhang et al. [58]. As shown in Fig. 7, the cooling air can be delivered directly to the server racks at different levels through the air-duct-integrated door (component 11 in Fig. 7). The active flow curve (AFC) method is a newly proposed methodology for characterizing the airflow organization in IT equipment over all airflow regions [59]. Alissa et al. [60] studied the effects of airflow imbalance on the reliability and utilization of an open compute (OC) storage data center at different levels (i.e., server and aisle levels),

Fig. 4. Data center with "in-row" air conditioning system [56].

Fig. 5. Fan-assisted perforated tiles based data center air conditioning [57].


Fig. 6. Solar chimney based data center air conditioning [22].

1. air source heat pump; 2. pump; 3. electrically operated valve; 4. air handling unit; 5. supply air duct; 6. inlet of supply air; 7. outlet of return air; 8. return air duct; 9. room; 10. rack; 11. supply air inlet of rack; 12. floor slab; 13. raised floor; 14. underfloor plenum; 15. supply air duct of rack. Fig. 7. Sketch of inner-cooled racks based UFAD data center [58].

and the sketch of the rack test chamber is shown in Fig. 8. In their study, models at the rack and aisle levels were developed based on the AFC method and experimental characterization data. The results indicated that the proposed modeling approach could mitigate the adverse effects of chiller failure and of the high economizer temperatures caused by airflow imbalance. Nemati et al. [61] investigated the thermal performance of a fully contained hybrid server cabinet, sketched in Fig. 9. The airflow circulation in the cabinet was driven by the blowers at the back of the servers, and the circulating air was cooled by the air/water heat exchanger at the bottom of the cabinet. The authors found that air leakage, which causes mixing of cold and hot air, had a significant effect on the thermal environment in the cabinet. Cooling system failure scenarios of water pump failure and blower failure were also examined in their study. The results showed that a water pump failure caused an obvious increase in the supply air temperature and overheating of the servers. For a blower failure, however, the supply air temperatures remained within the recommended range even while the CPU had been overheating for 250 s. Thus, it is necessary to monitor both types of cooling system failure to avoid misjudging server operating health.
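Since a blower failure can overheat CPUs while the supply air still reads normal, a monitoring rule has to watch both signals; the following sketch illustrates such a two-signal triage with hypothetical thresholds and readings (not values from Nemati et al. [61]).

```python
def classify_cooling_failure(supply_air_temp, cpu_temp,
                             supply_limit=27.0, cpu_limit=85.0):
    """Coarse failure triage from two sensor readings (°C).

    Pump/chiller failures tend to raise the supply air temperature,
    while a blower failure can overheat CPUs with the supply air still
    in the recommended range, so both signals are checked together.
    """
    supply_hot = supply_air_temp > supply_limit
    cpu_hot = cpu_temp > cpu_limit
    if supply_hot and cpu_hot:
        return "water-pump/chiller-side failure suspected"
    if cpu_hot:
        return "airflow-side (blower) failure suspected"
    if supply_hot:
        return "supply-side drift: check CRAC setpoint"
    return "normal"

print(classify_cooling_failure(24.5, 92.0))  # blower-like signature
print(classify_cooling_failure(31.0, 95.0))  # pump-like signature
```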


Fig. 8. Sketch of the rack level configuration [60]: (a) rack level test chamber – side view, (b) open rack (OR) model, (c) detailed and compact models and (d) storage/server IT deployment.

Fig. 9. Illustration of the airflow organization in a fully contained hybrid server cabinet [61]: (a) top view; (b) side view.

2.3.2. Advances in liquid cooling technology

Although air is the most common heat transfer medium in data centers, it is not sufficiently effective owing to its low density and low heat removal capacity; the heat removal capacity of air is only around 37% of that of water in data centers [62]. Liquid cooling is a more effective cooling solution for data centers, as it not only improves the cooling efficiency but also avoids poor air organization. In addition, liquid cooling can be deployed in data centers in either a direct or an indirect way.

In the direct liquid cooling solution, the electronic components in the server are in direct contact with the liquid coolant through a cold plate attached to the CPU. The heat transfer process consists of two steps [63]: (1) the sink-to-air heat transfer process; and (2) the air-to-chilled-water heat transfer process. The significant advantages of this technology are the reduced thermal resistance between the heat source and the cold source and its good adaptability to various cooling solutions. Kheirabadi et al. [64] studied the thermal performance of cold plates with three channel designs: straight channels, a serpentine channel, and mixed straight-serpentine channels (Fig. 10). Their results showed that the straight-channel design was the most effective of the three over the tested flow rates and heat loads of the electronic components, requiring only 0.34 W of pump power at a flow rate of 3.4 L/min and a maximum CPU temperature of 68 °C. Three factors, the CPU utilization, the coolant set-point temperature, and the server type, were considered by Alkharabsheh et al. [65] in investigating the impact of a direct liquid cooling system failure on the CPU. It was found that the first two factors have a greater influence than the third during the failure. Furthermore, the CPU temperature and power utilization were determined by the CPU frequency throttling mechanism.
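The 0.34 W pump power at 3.4 L/min reported above for the straight-channel cold plate implies a small loop pressure drop, which can be backed out from the ideal hydraulic-power relation P = V̇·Δp (pump losses are neglected, so this is only an upper-bound sketch):

```python
def implied_pressure_drop(pump_power_w, flow_l_per_min):
    """Back out the loop pressure drop from ideal hydraulic power P = V*dp.

    Assumes a 100%-efficient pump, so the real pressure drop would be
    somewhat lower for the same electrical input.
    """
    flow_m3_per_s = flow_l_per_min / 1000.0 / 60.0
    return pump_power_w / flow_m3_per_s  # Pa

dp = implied_pressure_drop(pump_power_w=0.34, flow_l_per_min=3.4)
print(f"implied cold-plate loop pressure drop <= {dp / 1000:.1f} kPa")
```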


Fig. 10. Sketch of three channel designs of the cold plate (from top to bottom): straight channels, a serpentine channel, and mixed straight-serpentine channels [64].

The authors recommended deploying a backup cold plate in case the operating direct liquid cooling system fails, and studying load migration techniques to prevent such failures.

Compared with direct liquid cooling, the indirect cooling technique exhausts the heat through a liquid-cooled door attached to the back of a rack. In other words, direct cooling is always at the chip level, whereas indirect cooling is commonly configured at the rack level. Applying a liquid-cooled door reduces the need to separate hot and cold air in data centers [63]. Almoli et al. [66] introduced a liquid cooling technique for rack cooling that removes the waste heat more effectively. In their study, a liquid-loop heat exchanger was attached at the rear of the racks to prevent mixing of the hot and cold air. CFD simulations showed that racks cooled by the liquid loop could reduce the cooling load of the CRAC units in UFAD systems combined with cold and hot aisles. Gao et al. [67] modeled a data center with racks equipped with liquid-air heat exchangers configured at the back doors of the racks. Both steady-state and transient thermal performance were analyzed in their research. The results showed that the dynamic performance improved significantly when the information technology (IT) equipment load varied over a wide range or the CRAC accidentally shut down, which could ensure the safe operation of the servers in the racks. Nemati et al. [68] studied the performance of operating rear-door liquid-air heat exchangers during an air blower failure. The results showed that a blower failure would create a negative pressure of −25 to −30 Pa, and the IT equipment would draw airflow through the system leakages, which would lead to a server temperature rise within about 10–16 min.

2.3.3. Energy analysis methods for optimization

Exergy destruction and entransy dissipation have been widely applied in thermal analyses of data centers [38,69,70]. An exergy-based analysis tool was developed by McAllister et al. [70] to evaluate the thermal performance of data centers. In this tool, the exergy destruction is used as the indicator of inefficient airflow in data centers. Their study showed that the tool was more efficient than CFD simulations, with comparable accuracy, for the thermal management of data centers. Qian et al. [38] investigated the effect of air mixing on the cooling air distribution in data centers. The thermal resistance and associated indices, including the air mixing index (AMI), air distribution index (ADI), and integrated heat transfer index (IHTI), were proposed based on entransy dissipation. These indices can be used to evaluate air mixing and the integrated heat transfer performance. Their analysis found that a small thermal resistance indicates better thermal performance, and that bypass air mixing has a significant effect on the data center airflow organization. Another study by Qian et al. [69] showed that exergy analysis is not always effective for evaluating the thermal performance, because the system parameters with the minimum exergy loss per unit cooling capacity are not necessarily optimal. The entransy dissipation was found to be more reasonable than exergy analysis for optimizing the thermal performance of data centers.
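The thermal-resistance comparison used by Qian et al. [38] reduces to R = ΔT/Q once the heat flow and terminal temperatures are chosen; the sketch below uses hypothetical numbers purely to illustrate how bypass mixing inflates the effective resistance.

```python
def thermal_resistance(t_hot, t_cold, q_w):
    """Equivalent thermal resistance R = dT/Q (K/W); smaller is better.

    Qian et al. [38] link a smaller resistance to better integrated
    heat transfer between the racks and the cooling air.
    """
    return (t_hot - t_cold) / q_w

# Hypothetical rack-to-supply-air temperature gaps at the same 50 kW load:
# heavy bypass mixing raises the effective gap and hence the resistance.
print(f"well-organized airflow: R = {thermal_resistance(32.0, 18.0, 50e3):.2e} K/W")
print(f"strong bypass mixing:   R = {thermal_resistance(38.0, 18.0, 50e3):.2e} K/W")
```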


3. Potential solutions for energy conservation

The thermal environment is critical for the safe operation of a data center [71,72]. To realize a better thermal environment, the existing problems of insufficient cooling capacity and poor distribution of temperature and airflow must be resolved [73]. Furthermore, the power consumption of the air-conditioning system often accounts for a large proportion of the total energy consumption of a data center [74,75]. Therefore, various solutions, especially free cooling and heat recovery strategies, have been proposed during the past decade.

3.1. Free cooling

Free cooling utilizes natural cold sources for energy saving and recovery. For data center applications it takes three forms: air-side free cooling, water-side free cooling, and heat pipe free cooling [76].

3.1.1. Air-side free cooling

Air-side free cooling is commonly provided through an air-side economizer. Ham et al. [77] analyzed the annual cooling energy for a data center equipped with nine kinds of air-side economizers. They found that the cooling load was decreased by 76–99% depending on the type of air-side economizer. To further study the effect of air-side economizers on the energy efficiency of data centers, Ham et al. [78] characterized the thermal performance of the computer room air handler (CRAH) with a simplified model. The results showed that when the CRAH supply air temperature exceeded 19 °C, the increase in cooling energy consumption was attributable solely to the fan energy of the air-side economizer, indicating that raising the supply air temperature does not always lead to more efficient energy use in data centers. As an important influencing factor for air-side free cooling, the climate must be considered carefully when employing an air-side economizer in a data center. To achieve energy savings, the air-side free cooling system and its operating schedule should be designed according to the climate zone and the local weather conditions [79]. Weather data from twenty weather stations in Australia were used by Siriwardana et al. [80] to analyze the energy efficiency of air-side economizers in data centers. The results showed that energy savings could be achieved only in the dry and cold climate zones of Australia. A similar analysis was conducted by Ham et al. [81] for South Korea. Three types of air-side economizers were simulated for data center application, and climate data from sixteen locations in South Korea were selected for a parametric study of the supply air temperature and heat exchanger effectiveness on the optimization of energy use. Their results showed that the optimum supply air temperature fell in the range of 18–23 °C for all sixteen locations. Lee et al. [82] investigated the effect of different climate zones on the energy performance of air-side free cooling in data centers. A comparison of selected locations in seventeen climate zones worldwide showed that the energy saving potential of data centers located in cool climate zones was higher than that in dry or humid climate zones. The authors also indicated that air-side free cooling is not suitable for every zone in the world.
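A first-cut feasibility screen for an air-side economizer simply counts the hours when outdoor air is cold enough to meet the supply setpoint; the sketch below does this with randomly generated weather and hypothetical setpoints (the 18–23 °C supply range reported by Ham et al. [81] motivates the default).

```python
import random

def free_cooling_hours(outdoor_temps, supply_setpoint=20.0, approach=2.0):
    """Count hours when an air-side economizer could carry the load alone.

    Free cooling is assumed feasible when outdoor air is at least
    `approach` K below the supply setpoint (a crude sensible-only rule;
    humidity limits are ignored).
    """
    limit = supply_setpoint - approach
    return sum(1 for t in outdoor_temps if t <= limit)

# Hypothetical hourly outdoor temperatures for one week (°C).
random.seed(0)
week = [random.uniform(5.0, 30.0) for _ in range(24 * 7)]
hours = free_cooling_hours(week)
print(f"economizer-feasible hours: {hours}/{len(week)} ({hours / len(week):.0%})")
```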

3.1.2. Water-side free cooling

Water-side free cooling achieves the free cooling process by utilizing a natural cold source through a cooling water facility [76]. Water-side free cooling using a water-side economizer (Fig. 11), which utilizes the liquid cooling technique, has been demonstrated as an effective means of achieving energy conservation in data centers. A liquid-

Fig. 11. Sketch of integrated water-side economizer plant [84].


cooled chiller-less data center test facility consisting of warm-water-cooled servers and a liquid-side economizer was proposed by Gao et al. [83], and the detailed dynamic responses of the cooling components were investigated for the proposed system. The experiments showed that the cooling energy consumption could be reduced by 5% or more, and the water consumption of the economizer could also be decreased. The energy performance can be further improved by employing the water-side economizer in data centers with cold aisle containment [85–87]. Ham et al. [84] compared the annual cooling energy consumption of a water-side economizer based data center with

1. air source heat pump; 2. pump; 3. electrically operated valve; 4. air handling unit; 5. supply air duct; 6. inlet of supply air; 7. outlet of return air; 8. return air duct; 9. room; 10. phase-change energy storage based rectifying device; 11. diffuser; 12. floor slab; 13. underfloor plenum; 14. raised floor; 15. air valve; 16-17. wall. (a) Air source [93]

1. air source heat pump; 2. pump; 3-4. electrically operated valve; 5. air handling unit; 6. supply air duct; 7. inlet of supply air; 8. outlet of return air; 9. return air duct; 10. room; 11. phase-change energy storage based rectifying device; 12. insulated material reel; 13. railway; 14. diffuser; 15. floor slab; 16. raised floor; 17. underfloor plenum; 18-19. wall. (b) Water looped [94]

Fig. 12. Sketch of UFAD system combined with energy storage based on phase change material (see above-mentioned references for further information).


cold aisle containment and with free open cold aisles. The results showed that although the supply air temperature of the free open cold aisle configuration was about 8 °C lower than that of the cold aisle containment configuration, the air was over-supplied, and the free open configuration produced bypass and recirculation air mixing. Thus, the energy conservation of the enclosed cold aisle configuration for the water-side economizer based data center was greater, even when air leakage was considered.

3.1.3. Heat pipe free cooling

The heat produced by server racks can be removed by heat pipes at a small temperature difference without external energy input [76], so the energy performance can be improved significantly. A data center cooled by heat pipes was proposed by Wang et al. [88], and thermal balance experiments were carried out to evaluate its thermal performance in five climate zones of China. The results showed that the energy performance differed with climate zone and season. In particular, the power usage effectiveness (PUE) of the data center was 0.3 lower than that of data centers cooled by vapor compression systems in the cold zones of China. The PUE is defined as the ratio of the total power of the data center to that consumed by the IT equipment, with a benchmark of 2 and an ideal value of 1 [89]. Thus, the energy efficiency of the heat pipe cooled data center in their study was better than that of data centers cooled by vapor compression systems. Tian et al. [90] modeled an internally cooled rack in which two-stage heat pipe loops were integrated with a water-side economizer for high-density data centers. The simulations showed that this cooling system not only improved the thermal management but also reduced the energy consumption by about 46% compared with evaporative cooling data centers.

3.1.4. Other free cooling techniques

As an alternative free cooling technique, the thermosyphon heat exchanger was adopted by Feng et al. [91] to provide cooling energy to a data center using a natural cold source (i.e., outdoor cool air). The heat dissipation characteristics as well as the energy consumption were analyzed in their study. Compared with a typical data center with vapor compression refrigeration, the annual energy consumption of the thermosyphon based data center was decreased by 35.4%; however, the energy saving capability varied with season and climate zone. Another study on thermosyphon based data centers was conducted by Zhang et al. [92]. An integrated system of mechanical refrigeration and thermosyphon (ISMT) was modeled, and the experiments showed that the ISMT yielded an annual energy saving of 5.4–47.3% compared with vapor compression cooling systems at an indoor temperature of 27 °C. Zhang et al. [93] proposed a phase change material (PCM) based air distributor (component 10 in Fig. 12(a) and component 11 in Fig. 12(b)), integrated in the underfloor plenum of a UFAD system. As shown in Fig. 12, cold energy can be stored in the PCM based air distributor at nighttime and released through the distributor at daytime. Energy saving is achieved on account of the temperature difference between nighttime and daytime (air source configuration), and the energy utilization is also improved by peak load shifting (water looped configuration).

3.2. Heat recovery and utilization

Although the waste heat from data centers is of low energy grade and difficult to utilize [95], the huge amount of waste heat produced by the servers is attracting growing interest in capturing and reusing it. Lu et al. [96] investigated the possibility of heat recovery from an operating data center located in Finland. Owing to the extremely cold climate in Finland, the heat recovery system could probably provide year-round space heating and hot water for a large-scale non-domestic building. The supercomputer system named "Aquasar", which uses hot water cooling to achieve energy savings, was adopted by Zimmermann et al. [97] to evaluate the heat recovery potential. Their investigation indicated that the hot water used to cool "Aquasar" could afterwards be used for building heating (e.g., heating radiators and radiant floor heating) and absorption refrigeration. Haywood et al. [98] suggested that the waste heat collected from data centers could be used to drive an absorption chiller, and the cold water generated by the absorption chiller could then provide supplemental cooling to the data center. According to an exergy analysis, the waste heat generated by the servers could satisfy the operating requirements of the absorption chiller. Furthermore, the PUE of the proposed system was close to 1, and could be further improved by adding solar collectors.

3.3. Control and prediction techniques for system operation

Advanced control and prediction strategies have been employed to improve the thermal management of data centers and the operation of the associated cooling systems [99–101]. For blade server data centers, the heat rejection is highly sensitive to the current leakage of the central processing unit (CPU). However, the control algorithms for server fans are related only to the operating status of the CPU and are unaware of the current leakage; hence the existing fan speed control algorithms cannot satisfy the requirement for optimal supply air temperature and velocity. A proportional-integral-derivative (PID) algorithm was used by Durand-Estebe et al. [102] to control the fan speed accounting for the CPU current leakage, and a CFD model was established based on the proposed control method, combining the CPU current leakage and the overall electric consumption of the hardware. The simulations showed that the optimal supply air temperature of the CRAC was around 24–25 °C in this study. To achieve real-time control and prediction of energy use, artificial neural networks (ANN) and a genetic algorithm (GA) were integrated into a thermal management system by Song et al. [103]. Compared with a fully CFD-based prediction method, the ANN-GA based thermal management system greatly reduced the total computational time while achieving precision similar to the CFD simulations. Lin et al. [104] developed a real-time transient thermal model to predict power failures and the associated temperature rise near the racks. Solutions for controlling the thermal environment during a power failure, including placing backup cooling equipment, shortening the restart time, maintaining adequate reserve cooling capacity, and employing thermal storage, were recommended in their study. Compact models (i.e., the ANN method and the proper orthogonal decomposition method) have been used to predict thermal environments for the safe operation of data centers, owing to their reduced computation time [105–107]. However, the prediction of parameter values and geometrical configurations with compact models is inaccurate for high-density data centers [108]. For data centers configured with cold and hot aisles, the zonal method was proposed to offset this deficiency of compact models [109,110]. The results showed that a compact model assisted by the zonal method could achieve good accuracy within a reduced computation time, and the proposed model also provided effective predictions of the thermal performance of data centers [33,108,111]. Bana et al. [112] developed a predictive modeling method for data centers based on their commercial software 6SigmaDC. The proposed ACE performance score, a visual evaluation method for data center performance, was integrated into the modeling study. It contains three factors: availability, physical capacity, and cooling efficiency; a change in any one factor affects the other two. A triangle graph of the ACE performance score (Fig. 13) allows the owners and designers of data centers to observe the gap between the actual and ideal performance of their data centers. By establishing a data center model in 6SigmaDC, the integrated tool calculates the ACE score of the model automatically, and the owners or designers can improve the model performance according to the ACE score. In one case, predictive modeling saved about 10 million dollars and reduced the PUE by 15% by solving problems of air recirculation, airflow delivery, and supply air temperature.
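Of the control strategies above, the PID fan-speed loop studied by Durand-Estebe et al. [102] is the simplest to sketch. The snippet below implements a textbook discrete PID acting on a one-node server model; the gains, setpoint, and heat-removal law are hypothetical placeholders rather than values from their study.

```python
class PID:
    """Textbook discrete PID controller."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy one-node server: CPU temperature relaxes toward an equilibrium
# that drops as the fan command (0.1..1.0) increases.
pid = PID(kp=0.08, ki=0.01, kd=0.0, dt=1.0)
cpu_temp, fan = 80.0, 0.3
for _ in range(120):
    # Hotter CPU -> negative PID output -> larger fan command.
    fan = min(1.0, max(0.1, fan - pid.step(70.0, cpu_temp)))
    equilibrium = 95.0 - 35.0 * fan          # hypothetical heat-removal law
    cpu_temp += 0.1 * (equilibrium - cpu_temp)
print(f"after 120 s: CPU ≈ {cpu_temp:.1f} °C at fan command {fan:.2f}")
```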


Fig. 13. Visualization of the ACE performance score [112].

4. Thermal evaluation metrics

4.1. Significance of thermal evaluation

As described above, the high heat density and uneven heat distribution in data centers deteriorate the thermal environment, requiring the cooling system to run in cooling mode throughout the year [113]. The objective of the cooling system in a data center differs from that of air conditioning in residential and commercial buildings, as it aims to avoid high temperatures inside the racks as well as local hot spots, thereby ensuring the safe operation of the computing equipment. To realize a better thermal environment, the existing problems of huge energy consumption and poor distribution of temperature and airflow must be resolved [73]. Although techniques for energy conservation and for improving the thermal distribution were discussed above, it is even more important to evaluate the energy efficiency achieved by these thermal management strategies. A proper evaluation metric can not only be used to judge the merits of the thermal environment in the data center, but can also specify the direction of energy conservation for the data center cooling system.

4.2. Alternative metrics

For data centers, the supply heat index (SHI), return heat index (RHI), rack cooling indices (RCI), and return temperature index (RTI) are commonly used to evaluate the energy utilization efficiency, while the power efficiency can be assessed by the PUE [114,115]. The dimensionless parameters SHI and RHI were proposed by Sharma et al. [116] to indicate the thermal environment of large-scale data centers. They were derived from numerous CFD studies of a real data center and are expressed as [116]:

SHI = δQ/(Q + δQ)    (1)

RHI = Q/(Q + δQ)    (2)

where Q is the total heat dissipation from all the racks in the data center, and δQ is the enthalpy rise of the cold air before entering the racks. The relationship between SHI and RHI is then given by [116]:

SHI + RHI = 1    (3)

According to [1], slight mixing of hot and cold air, associated with a higher cooling efficiency, occurs when SHI is close to 0 or RHI is close to 1.

The RCI was adopted by Herrlin et al. [117] to evaluate the rack cooling efficiency and the health of the thermal environment for a specified rack deployment. RCIHI and RCILO are given by [117]:

RCIHI = [1 − Σ(Tx − Tmax-rec)|Tx > Tmax-rec / ((Tmax-all − Tmax-rec) · n)] × 100%    (4)

RCILO = [1 − Σ(Tmin-rec − Tx)|Tx < Tmin-rec / ((Tmin-rec − Tmin-all) · n)] × 100%    (5)

where RCIHI and RCILO represent the thermal health at the high end and the low end of the temperature range; Tx is the mean temperature at each rack intake; n is the total number of intakes; Tmax-rec and Tmin-rec are the maximum and minimum recommended temperatures according to a guideline or standard; and Tmax-all and Tmin-all are the maximum and minimum allowable temperatures according to a guideline or standard. This "health" has been used to indicate the thermal environment of a data center and can be quantified from the range of RCIHI and RCILO [117]; a higher RCI indicates a healthier (i.e., better) thermal environment in the server racks and a more effective cooling system. The RTI was also adopted by Herrlin et al. [118] to analyze the airflow distribution near the racks: bypass airflow is indicated when the RTI is lower than 100%, and recirculation airflow when it is higher than 100%. The RTI is given by [118]:

RTI = (Tret − Tsup)/ΔTequip × 100%    (6)

where Tret is the return air temperature, Tsup is the supply air temperature, and ΔTequip is the temperature increase across the IT equipment.

The PUE, proposed by the Green Grid [89], has become the most commonly used metric for evaluating the energy efficiency of data centers. As mentioned in Section 3.1.3, the PUE is a ratio with a benchmark of 2, and better energy efficiency is indicated as the PUE approaches 1. The PUE is defined as [89]:

PUE = Qtotal/QIT    (7)

where Qtotal is the total facility power consumption of the data center and QIT is the IT equipment power consumption.

The evaluation metrics mentioned above have been applied to investigate the thermal performance and energy efficiency of data centers in recent years. Cho et al. [18] compared the thermal performance of 46 different CFD models of data centers using the calculated metrics SHI, RHI, RCI, and RTI. Their research showed that 18 °C was the most suitable supply temperature for the UFAD system in the data centers, and that it could be raised to 22 °C if the cold aisle was enclosed. The authors also suggested that overall thermal performance evaluation combined with research on individual parameters will become increasingly important in data centers, and that RTI and SHI/RHI can be considered overall performance metrics. Another performance metric study, focused on the vertical aisle partition system (Fig. 14) in high-density data centers, was performed by Cho et al. [37]. In their study, the metrics RCI and RTI were used with CFD models to evaluate the performance of the proposed aisle partition system. The results showed that RCI and RTI can serve as indicators for analyzing the performance of different cooling solutions and for optimizing the design of data center cooling systems. Cho et al. [119] also addressed the necessity of a methodology using the PUE to estimate the energy performance of data centers. Owing to the uniqueness of data center cooling systems, an energy performance evaluation program called DCeET was developed; the deviation between the energy simulations of DCeET and TRNSYS was lower than 5%. Another study on overall performance metrics was conducted by Lajevardi et al. [120], in which the energy and cooling efficiency of a small data center was evaluated with 25 parameters over a period of six weeks of real-time monitoring. A wireless sensor network was established for the operational data center cooling system, and the measured data were compared using SHI, RHI, RCI, RTI, and PUE. Their research found that the server racks were overcooled by more than 25% when the SHI and RHI were 16% and 74%, respectively, demonstrating the potential for thermal improvement by lowering the cooling load. The authors also suggested that the thermal performance should be evaluated with metrics at the data center level and the server rack level simultaneously, so that the overall performance of the data center cooling system can be indicated more accurately. The data center energy performance metric (DCEPM), defined as the ratio of the useful work output (i.e., server utilization) to the total energy supporting the CPU work, was adopted by Beitelmal et al. [121] to evaluate the energy performance of servers and data centers. In their study, the server and the data center were defined as local-scale and global-scale thermodynamic systems, respectively, and simulations were performed at both scales according to the experimental results. The simulations showed that the energy consumption increased with rising computational load, and that the power efficiency calculated by the global-scale metric was lower than that at the local scale, as the idle servers in the room space reduced the overall performance.

4.3. Limitation of current metrics

Although the existing evaluation metrics play an important role in describing the thermal performance and indicating the optimization direction for data centers, they still have limitations in practice. Capozzoli et al. [122] reviewed the most commonly used thermal indices for data centers, including SHI, RHI, RCI, RTI, the β index, the negative pressure ratio (NP), and the bypass ratio (BP). A critical analysis of the characteristics of each metric was presented, which can be considered a reference for applying energy performance metrics in data centers. Yuventi et al. [123] discussed the insufficiency of the PUE for evaluating energy consumption: the PUE should be considered an indicator of the minimum possible energy use rather than of the energy consumption over a period (e.g., one year). A new energy-based metric, the energy usage effectiveness (EUE), was also proposed in their study. The EUE can be taken as an adjustment of the PUE that gives a better understanding of the energy efficiency of a data center and guides the development of energy rating/ranking systems and energy codes. The EUE is given by [123]:

EUE = Et/Es    (8)

where Et is the total electrical energy consumed by the data center during a period, and Es is the energy consumed by the servers during that period. Brady et al. [124] investigated the relationship between varying IT loads and the PUE in an operating data center located in the United States. Their study concluded that the PUE is not always reliable for evaluation, because some energy saving measures (e.g., virtualization) can increase the value of the PUE and thereby mislead the evaluation. Furthermore, the PUE does not include the efficiency of the servers in performing their required operations. Thus, a metric incorporating the energy efficiency of all equipment and infrastructure (i.e., including the server units) would be more effective for evaluating the energy use of data center cooling systems. Other metrics focused on the airflow organization and temperature distribution have also been proposed [113,125]. The β index proposed by Schmidt et al. [113] was adopted as an evaluation metric for the temperature at local rack inlets. The β index is defined as [113]:

β = ΔTinlet/ΔTrack    (9)

where ΔTinlet is the temperature difference between the CRAC supply air and the rack inlets, and ΔTrack is the temperature rise through the server racks. The β index differs with the location of each rack on account of the different heights of the rack inlets. The β index ranges from 0 to 1: there is no air recirculation when β equals 0, and overheating occurs in the racks when β exceeds 1.

Fig. 14. Vertical aisle partition system [37]: (a) system A: typical underfloor air cooling system; (b) system B: underfloor air cooling with aisle partition system.
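Eqs. (1)–(7) can be evaluated directly once rack-intake temperatures and power readings are available. The sketch below uses hypothetical readings; the 27/32 °C recommended/allowable thresholds merely stand in for whichever guideline applies.

```python
def shi(delta_q, q):
    """Supply heat index, Eq. (1): cold-air enthalpy rise fraction."""
    return delta_q / (q + delta_q)

def rci_hi(intake_temps, t_max_rec=27.0, t_max_all=32.0):
    """Rack cooling index (high end), Eq. (4), as a percentage."""
    n = len(intake_temps)
    over = sum(t - t_max_rec for t in intake_temps if t > t_max_rec)
    return (1.0 - over / ((t_max_all - t_max_rec) * n)) * 100.0

def rti(t_ret, t_sup, dt_equip):
    """Return temperature index, Eq. (6): 100% is ideal."""
    return (t_ret - t_sup) / dt_equip * 100.0

def pue(q_total, q_it):
    """Power usage effectiveness, Eq. (7): 1 is ideal, 2 the benchmark."""
    return q_total / q_it

# Hypothetical snapshot of a small room.
intakes = [22.5, 24.0, 28.5, 23.0]                  # rack intake temps (°C)
s = shi(delta_q=12e3, q=80e3)
print(f"SHI = {s:.3f}, RHI = {1 - s:.3f}")          # Eq. (3): SHI + RHI = 1
print(f"RCI_HI = {rci_hi(intakes):.1f} %")
print(f"RTI = {rti(t_ret=32.0, t_sup=18.0, dt_equip=12.0):.0f} %")
print(f"PUE = {pue(q_total=900e3, q_it=500e3):.2f}")
```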

Table 1
Summary of current evaluation metrics for data centers.

• SHI (equipment safety operation) [116]. Model: SHI = δQ/(Q + δQ). Significance: quantifies the enthalpy rise in the cold aisle; cooling performance increases with decreasing enthalpy rise; SHI close to 0 indicates higher cooling efficiency. Advantages: analysis based on enthalpy; macro evaluation of air recirculation. Disadvantages: lacks local rack temperatures; short of server equipment reliability.

• RHI (equipment safety operation) [116]. Model: RHI = Q/(Q + δQ). Significance: quantifies the heat extraction by the CRAC; more return air back to the CRAC is associated with higher cooling performance; RHI close to 1 indicates higher cooling efficiency; SHI + RHI = 1. Advantages: macro evaluation of bypass air for the CRAC. Disadvantages: same as SHI.

• RCI (equipment safety operation) [117]. Model: Eqs. (4) and (5). Significance: quantifies equipment health; RCI close to 1 means healthier equipment. Advantages: includes allowable and recommended temperatures. Disadvantages: ignores the effect of the heat source.

• RTI (equipment safety operation) [118]. Model: RTI = (Tret − Tsup)/ΔTequip × 100%. Significance: indicates the effect on supply air distribution; RTI = 100% (ideal); RTI < 100% (bypass airflow); RTI > 100% (recirculation airflow). Advantages: qualitative and quantitative description of bypass and recirculation air. Disadvantages: little focus on local thermal performance.

• PUE (energy conservation) [89]. Model: PUE = Qtotal/QIT. Significance: power consumption ratio of the total facility to the IT equipment; benchmark of 2; PUE close to 1 indicates higher energy efficiency. Advantages: figures out the potential for electricity saving. Disadvantages: predicts the minimum possible energy use rather than the energy consumption over a period.

• EUE (energy conservation) [123]. Model: EUE = Et/Es. Significance: improvement on the PUE; indicator of the energy consumption of the servers. Advantages: describes the energy efficiency of a data center in detail. Disadvantages: short of an energy saving limit; limited to macro evaluation.

• β (equipment safety operation) [113]. Model: β = ΔTinlet/ΔTrack. Significance: airflow evaluation for local racks; no air recirculation at β = 0, and overheating in racks for β > 1. Advantages: considers both air recirculation and overheating; focuses on the local thermal performance of server racks.

• NP (equipment safety operation) [125]. Model: NP = (Tsup,uf − Tsup,C)/(Tret,C − Tsup,uf). Significance: evaluation of negative pressure (i.e., air infiltration into the plenum); no air infiltration at NP = 0. Advantages: describes the detailed temperature distribution for various locations or heights of rack inlets. Disadvantages: temperatures are averaged values; ignores local performance.

• BP (equipment safety operation) [125]. Model: BP = (Tout,S − Tret,C)/(Tout,S − Tsup,uf). Significance: evaluation of bypass air for the CRAC; no bypass air at BP = 0. Advantages and disadvantages: same as NP.

• R (equipment safety operation) [125]. Model: R = (Tin,S − Tsup,uf)/(Tout,S − Tsup,uf). Significance: evaluation of air recirculation into the racks; no recirculation air mixing at R = 0. Advantages and disadvantages: same as NP.

• BAL (equipment safety operation) [125]. Model: BAL = (Tout,S − Tin,S)/(Tret,C − Tsup,C). Significance: evaluation of the balance between the real and the required air distribution of the server racks; BAL = (1 − R)/[(1 − BP) × (1 + NP)]; ideal condition at BAL = 1. Advantages and disadvantages: same as NP.

The metrics of negative pressure ratio (NP), bypass ratio (BP), recirculation ratio (R), and balance ratio (BAL) for evaluating airflow management in data centers were introduced by Tozer et al. [125]. The NP, BP, and R indicate the effects of negative pressure and air mixing, which are smaller as these metrics approach 0. The BAL is determined by the balance between the real air distribution and the required air distribution in the server racks, and equals 1 under the ideal condition (i.e., the inlet and outlet temperatures of the CRAC equal the temperatures at the inlets and outlets of the racks, respectively). The equations of these metrics are as follows [125]:

$$\mathrm{NP} = \frac{T_{sup}^{uf} - T_{sup}^{C}}{T_{ret}^{C} - T_{sup}^{uf}} \tag{10}$$

$$\mathrm{BP} = \frac{T_{out}^{S} - T_{ret}^{C}}{T_{out}^{S} - T_{sup}^{uf}} \tag{11}$$

$$\mathrm{R} = \frac{T_{in}^{S} - T_{sup}^{uf}}{T_{out}^{S} - T_{sup}^{uf}} \tag{12}$$

$$\mathrm{BAL} = \frac{T_{out}^{S} - T_{in}^{S}}{T_{ret}^{C} - T_{sup}^{C}} \tag{13}$$

where $T_{sup}^{uf}$ is the supply air temperature of the underfloor plenum; $T_{sup}^{C}$ and $T_{ret}^{C}$ are the supply and return air temperatures of the CRAC; and $T_{out}^{S}$ and $T_{in}^{S}$ are the outlet and inlet temperatures of the server rack, respectively. As an overview of the current evaluation metrics for data centers, all of the metrics discussed in this section are summarized in Table 1.
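As a worked illustration of Eqs. (10)–(13), the short sketch below evaluates the four air management metrics from a set of spot temperature measurements. The function and the numerical inputs are hypothetical; in the ideal case it should return NP = BP = R = 0 and BAL = 1, as stated above.

```python
# Minimal sketch of the air management metrics of Tozer et al. [125], Eqs. (10)-(13).
# Temperature arguments mirror the nomenclature above; all inputs are illustrative.

def air_management_metrics(t_sup_uf, t_sup_c, t_ret_c, t_out_s, t_in_s):
    np_ratio = (t_sup_uf - t_sup_c) / (t_ret_c - t_sup_uf)  # Eq. (10): negative pressure
    bp = (t_out_s - t_ret_c) / (t_out_s - t_sup_uf)         # Eq. (11): bypass air
    r = (t_in_s - t_sup_uf) / (t_out_s - t_sup_uf)          # Eq. (12): recirculation
    bal = (t_out_s - t_in_s) / (t_ret_c - t_sup_c)          # Eq. (13): balance
    return np_ratio, bp, r, bal

# Ideal condition: no infiltration, bypass, or recirculation.
print(air_management_metrics(t_sup_uf=15.0, t_sup_c=15.0,
                             t_ret_c=30.0, t_out_s=30.0, t_in_s=15.0))
# -> (0.0, 0.0, 0.0, 1.0)
```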

5. Optimization strategy for thermal management

Two objectives drive the optimization of the cooling system in a data center: (1) ensuring the safe operation of the server equipment; and (2) decreasing the energy consumption of the cooling system. Both concerns are emphasized and reviewed in this study, and the associated evaluation methods are discussed in detail. To provide a guideline for determining the design of a data center and its cooling system, as well as for improving the efficiency of thermal management, the flow chart of the optimization strategy for thermal management and energy conservation in data centers is shown in Fig. 15.

[Fig. 15. Strategy of thermal management and energy conservation for data centers. The flow chart proceeds from the initial design parameters through numerical (CFD) modeling and performance evaluation to control and prediction, balancing equipment safety against energy consumption. Influence factors include the height of the raised floor, the open area of the perforation, pressure variation and obstructions in the plenum, the hot/cold aisle configuration, non-uniform vertical airflow, hot/cold air mixing, local hot spots, negative pressure, the proportion of total energy usage, the climate zone, and the weather condition. Candidate solutions include variable perforated tiles, aisle containment, load deployment, fan-assisted terminals, drawer-type racks, liquid loop cooling, solar chimneys, in-row air conditioning, inner-cooled racks, air-side and water-side free cooling, heat pipes, cold energy storage, and heat recovery. Performance is evaluated with the metrics SHI, RHI, RTI, RCI, PUE, EUE, β, NP, BP, R, and BAL.]

As described in Fig. 15, both equipment safety and energy consumption should be considered during the design period. The general process for optimizing the thermal performance of a data center can then be summarized as: (1) determination of the design parameters; (2) system modeling and simulation; (3) improvement of the system configuration guided by the evaluation metrics; and (4) system optimization accounting for advanced techniques for the safe operation of the equipment and the energy saving of the cooling system. Although the influence factors on thermal management techniques and energy conservation solutions for data centers and their associated cooling systems are discussed in this study, future work should pay more attention to cooling solutions inside the server equipment (i.e., liquid cooling of the CPU module), to the deployment of incremental heat sources, and to the airflow distribution in the underfloor plenum.
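A schematic of this four-step loop might look as follows. The simulate(), evaluate(), and improve() hooks are hypothetical placeholders for a CFD model, the metric calculations of the previous section, and a design update rule; the RCI and PUE thresholds are illustrative only and would be set per project requirements.

```python
# Minimal sketch of the design -> simulate -> evaluate -> improve loop of Fig. 15.
# simulate(), evaluate(), improve(), and the thresholds are hypothetical placeholders.

def optimize_data_center(design, simulate, evaluate, improve, max_iter=20):
    metrics = {}
    for _ in range(max_iter):
        field = simulate(design)            # step 2: numerical modeling and simulation
        metrics = evaluate(field)           # step 3: evaluation metrics on the result
        safe = metrics["RCI"] >= 0.96       # equipment safety target (illustrative)
        efficient = metrics["PUE"] <= 1.5   # energy consumption target (illustrative)
        if safe and efficient:
            break                           # step 4: both objectives satisfied
        design = improve(design, metrics)   # adjust tiles, containment, setpoints, ...
    return design, metrics
```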

6. Conclusions

A detailed state-of-the-art review of recent publications on the thermal management and evaluation of data centers has been presented in this study. The effects of the airflow in the underfloor plenum and the thermal distribution in the room space and racks, together with advanced optimization solutions for thermal management, are examined. The conclusions are as follows:

• For the traditional data center using UFAD with cold and hot aisles, thermal management techniques mainly focus on the underfloor plenum and the room space. It has been clarified that the most important factor in the thermal management of data centers is the airflow organization under the raised floor. Researchers and engineers should first consider the physical and design parameters in the underfloor plenum; improvements targeting air mixing, negative pressure, and hot spots in the room space are then the effective measures for system optimization.

• The optimization methods for thermal management in the room space can be divided into advanced air cooling solutions and liquid cooling techniques. For both, the primary goal is to eliminate hot spots in the servers. Liquid cooling is more suitable for reducing local temperatures in IT equipment because of the more effective heat removal capacity of water. Thus, new cooling strategies combining overall and local cooling should be developed. For overall cooling, advanced air cooling solutions play an important role in optimizing the thermal environment at the room or aisle level. As the auxiliary counterpart of overall cooling, local cooling based on liquid cooling technology should target the hot spots in the servers, with precise airflow improvement facilities or cooling equipment developed at the rack or chip level.

• Energy conservation methods for data centers, including free cooling and heat recovery, have been proposed in recent years. Free cooling comprises air-side free cooling, water-side free cooling, and heat pipe free cooling. The climate zone has a significant effect on the performance of air-side free cooling, and cold-climate and high-altitude zones are favorable. In addition, the huge amount of waste heat generated by IT equipment should not be overlooked. Data centers located in cold-climate and high-altitude zones can achieve greater benefits by combining free cooling with heat recovery, whereby the recovered heat can be used to warm the room space.

• Overall evaluation metrics should be developed. As aids to data center cooling systems, both macro-scale and micro-scale evaluation methods are illustrated. At the macro scale, the evaluation metrics can be used to assess the thermal efficiency of the cooling system and to help improve the configuration of the data center. At the micro scale, numerical models and advanced algorithms focused on the local thermal performance of the server racks, the cold and hot aisles, and the underfloor plenum will contribute to the evaluation, prediction, and control of the thermal management of data centers. The global performance assessed by macro methods and the local performance assessed by micro methods should be considered simultaneously, and metrics for the overall judgement of data center cooling systems are urgently needed.

Acknowledgement

This work is supported by grants from the National Natural Science Foundation of China (No. 51406076) and the Natural Science Foundation of Jiangsu Province (No. BK20140942).

References

[1] Y. Fulpagare, A. Bhargav, Advances in data center thermal management, Renew. Sustain. Energy Rev. 43 (2015) 981–996.
[2] M.N. Rahman, A. Esmailpour, A hybrid data center architecture for big data, Big Data Res. 3 (2016) 29–40.
[3] J. Shuja, A. Gani, S. Shamshirband, R.W. Ahmad, K. Bilal, Sustainable cloud data centers: a survey of enabling techniques and technologies, Renew. Sustain. Energy Rev. 62 (2016) 195–214.
[4] M. Uddin, Y. Darabidarabkhani, A. Shah, J. Memon, Evaluating power efficient algorithms for efficiency and carbon emissions in cloud data centers: a review, Renew. Sustain. Energy Rev. 51 (2015) 1553–1563.
[5] S. Alkharabsheh, J. Fernandes, B. Gebrehiwot, D. Agonafer, K. Ghose, A. Ortega, et al., A brief overview of recent developments in thermal management in data centers, J. Electron. Packag. 137 (2015) 040801–40819.
[6] K. Cho, H. Chang, Y. Jung, Y. Yoon, Economic analysis of data center cooling strategies, Sustain. Cities Soc. 31 (2017) 234–243.
[7] H. Rong, H. Zhang, S. Xiao, C. Li, C. Hu, Optimizing energy consumption for data centers, Renew. Sustain. Energy Rev. 58 (2016) 674–691.
[8] E. Oró, V. Depoorter, A. Garcia, J. Salom, Energy efficiency and renewable energy integration in data centres. Strategies and modelling review, Renew. Sustain. Energy Rev. 42 (2015) 429–445.
[9] V. Mulay, S. Karajgikar, D. Agonafer, R. Schmidt, M. Iyengar, J. Nigen, Computational study of hybrid cooling solution for thermal management of data centers, in: ASME 2007 InterPACK Conference, vol. 1, 2007, pp. 723–731.
[10] S.A. Nada, A.M.A. Attia, K.E. Elfeky, Experimental study of solving thermal heterogeneity problem of data center servers, Appl. Therm. Eng. 109 (2016) 466–474.
[11] S.A. Nada, M.A. Said, Effect of CRAC units layout on thermal management of data center, Appl. Therm. Eng. 118 (2017) 339–344.
[12] S.A. Nada, M.A. Said, M.A. Rady, CFD investigations of data centers' thermal performance for different configurations of CRACs units and aisles separation, Alex. Eng. J. 55 (2016) 959–971.
[13] Z. Li, J. Ge, C. Li, H. Yang, H. Hu, B. Luo, et al., Energy cost minimization with job security guarantee in Internet data center, Future Generation Comput. Syst. 73 (2017) 63–78.
[14] J.A. Matteson, A. Vallury, B. Medlin, Maximizing data center energy efficiency by utilizing new thermal management and acoustic control methodology, in: ASME 2013 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, vol. 2, 2013, V002T09A11.
[15] Z. Song, X. Zhang, C. Eriksson, Data center energy and cost saving evaluation, Energy Proc. 75 (2015) 1255–1260.
[16] X. Zhang, T. Lindberg, N. Xiong, V. Vyatkin, A. Mousavi, Cooling energy consumption investigation of data center IT room with vertical placed server, Energy Proc. 105 (2017) 2047–2052.
[17] S.V. Patankar, Airflow and cooling in a data center, J. Heat Transf. 132 (7) (2010) 073001–73017.
[18] J. Cho, J. Yang, W. Park, Evaluation of air distribution system's airflow performance for cooling energy savings in high-density data centers, Energy Build. 68 (2014) 270–279.
[19] ASHRAE, Thermal Guidelines for Data Processing Environments. TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment, ASHRAE Inc, 2004.
[20] MOHURD, AQSIQ, Code for Design of Electronic Information System Room, GB 50174-2008, Planning Press, Beijing, China, 2008.
[21] S. Schiavon, K.H. Lee, F. Bauman, T. Webster, Simplified calculation method for design cooling loads in underfloor air distribution (UFAD) systems, Energy Build. 43 (2011) 517–528.
[22] K. Zhang, X. Zhang, S. Li, G. Wang, Numerical study on the thermal environment of UFAD system with solar chimney for the data center, Energy Proc. 48 (2014) 1047–1054.
[23] S.A. Nada, K.E. Elfeky, A.M.A. Attia, W.G. Alshaer, Experimental parametric study of servers cooling management in data centers buildings, Heat Mass Transf. 53 (6) (2017) 2083–2097.
[24] S. Nagarathinam, B. Fakhim, M. Behnia, S. Armfield, A comparison of parametric and multivariable optimization techniques in a raised-floor data center, J. Electron. Packag. 135 (2013) 030905–30908.
[25] K. Zhang, X. Zhang, S. Li, Simplified model for desired airflow rate in underfloor air distribution (UFAD) systems, Appl. Therm. Eng. 93 (2016) 244–250.


[26] K. Zhang, X. Zhang, S. Li, X. Jin, Review of underfloor air distribution technology, Energy Build. 85 (2014) 180–186.
[27] K. Zhang, X.S. Zhang, S.H. Li, X. Jin, Experimental parametric study on the temperature distribution of an underfloor air distribution (UFAD) system with grille diffusers, Indoor Built Environ. 25 (2016) 748–757.
[28] Y.-Z. Ling, X.-S. Zhang, K. Zhang, X. Jin, On the characteristics of airflow through the perforated tiles for raised-floor data centers, J. Build. Eng. 10 (2017) 60–68.
[29] K. Zhang, X. Zhang, S. Li, Optimization on airflow distribution in data room air-conditioning system with underfloor air distribution, J. Southeast Univ. Nat. Sci. Ed. 46 (2016) 62–69.
[30] K. Zhang, X. Zhang, S. Li, Thermal decay in supply air plenum of underfloor air distribution system, J. Southeast Univ. Nat. Sci. Ed. 45 (2015) 720–727.
[31] K. Zhang, X. Zhang, S. Li, X. Jin, Experimental study on the characteristics of supply air for UFAD system with perforated tiles, Energy Build. 80 (2014) 1–6.
[32] K.C. Karki, S.V. Patankar, Airflow distribution through perforated tiles in raised-floor data centers, Build. Environ. 41 (2006) 734–744.
[33] Z. Song, B.T. Murray, B. Sammakia, Long-term transient thermal analysis using compact models for data center applications, Int. J. Heat Mass Transf. 71 (2014) 69–78.
[34] Y. Fulpagare, G. Mahamuni, A. Bhargav, Effect of plenum chamber obstructions on data center performance, Appl. Therm. Eng. 80 (2015) 187–195.
[35] I.N. Wang, Y.-Y. Tsui, C.-C. Wang, Improvements of airflow distribution in a container data center, Energy Proc. 75 (2015) 1819–1824.
[36] K. Fouladi, A.P. Wemhoff, L. Silva-Llanca, K. Abbasi, A. Ortega, Optimization of data center cooling efficiency using reduced order flow modeling within a flow network modeling approach, Appl. Therm. Eng. 124 (2017) 929–939.
[37] J. Cho, B.S. Kim, Evaluation of air management system's thermal performance for superior cooling efficiency in high-density data centers, Energy Build. 43 (2011) 2145–2155.
[38] X. Qian, Z. Li, Z. Li, A thermal environmental analysis method for data centers, Int. J. Heat Mass Transf. 62 (2013) 579–585.
[39] M. Tatchell-Evans, N. Kapur, J. Summers, H. Thompson, D. Oldham, An experimental and theoretical investigation of the extent of bypass air within data centres employing aisle containment, and its impact on power consumption, Appl. Energy 186 (2017) 457–469.
[40] V. Sundaralingam, V.K. Arghode, Y. Joshi, W. Phelps, Experimental characterization of various cold aisle containment configurations for data centers, J. Electron. Packag. 137 (2014) 011007–11008.
[41] B. Fakhim, M. Behnia, S.W. Armfield, N. Srinarayana, Cooling solutions in an operational data centre: a case study, Appl. Therm. Eng. 31 (2011) 2279–2291.
[42] S.A. Nada, K.E. Elfeky, Experimental investigations of thermal managements solutions in data centers buildings for different arrangements of cold aisles containments, J. Build. Eng. 5 (2016) 41–49.
[43] S.A. Nada, K.E. Elfeky, A.M.A. Attia, Experimental investigations of air conditioning solutions in high power density data centers using a scaled physical model, Int. J. Refrig. 63 (2016) 87–99.
[44] W.A. Abdelmaksoud, T.Q. Dang, H. Ezzat Khalifa, R.R. Schmidt, Improved computational fluid dynamics model for open-aisle air-cooled data center simulations, J. Electron. Packag. 135 (2013) 030901–30913.
[45] D.W. Demetriou, H. Ezzat Khalifa, Thermally aware, energy-based load placement in open-aisle, air-cooled data centers, J. Electron. Packag. 135 (2013) 030906–30914.
[46] Z. Song, Studying the fan-assisted cooling using the Taguchi approach in open and closed data centers, Int. J. Heat Mass Transf. 111 (2017) 593–601.
[47] Z. He, Z. He, X. Zhang, Z. Li, Study of hot air recirculation and thermal management in data centers by using temperature rise distribution, Build. Simul. 9 (2016) 541–550.
[48] K. Zhu, Z. Cui, Y. Wang, H. Li, X. Zhang, C. Franke, Estimating the maximum energy-saving potential based on IT load and IT load shifting, Energy 138 (2017) 902–909.
[49] J. Siriwardana, S.K. Halgamuge, T. Scherer, W. Schott, Minimizing the thermal impact of computing equipment upgrades in data centers, Energy Build. 50 (2012) 81–92.
[50] J. Cho, T. Lim, B.S. Kim, Measurements and predictions of the air distribution systems in high compute density (Internet) data centers, Energy Build. 41 (2009) 1107–1115.
[51] X. Chen, J. Zhang, Air conditioning system design and air distribution optimization for the data centre room, Fluid Mach. 42 (2014) 79.
[52] C. Onyiorah, R. Eiland, D. Agonafer, R. Schmidt, Effectiveness of rack-level containment in removing data center hot-spots, in: 14th InterSociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, ITherm 2014, May 27, 2014–May 30, 2014, Institute of Electrical and Electronics Engineers Inc., Orlando, FL, United States, 2014, pp. 798–806.
[53] S.K. Shrivastava, M. Iyengar, B.G. Sammakia, R. Schmidt, J.W. VanGilder, Experimental-numerical comparison for a high-density data center: hot spot heat fluxes in excess of 500 W/ft2, IEEE Trans. Compon. Packag. Technol. 32 (2009) 166–172.
[54] Y.-I. Kwon, A study on the evaluation of ventilation system suitable for outside air cooling applied in large data center for energy conservation, J. Mech. Sci. Technol. 30 (2016) 2319–2324.
[55] M. Lloyd, L. Glicksman, Unique airflow visualization techniques for the design and validation of above-plenum data center CFD models, in: 2011 ASHRAE Winter Conference, January 29, 2011–February 2, 2011, PART 1 ed., Amer. Soc. Heating, Ref. Air-Conditioning Eng. Inc., Las Vegas, NV, United States, 2011, pp. 206–211.
[56] J. Priyadumkol, C. Kittichaikarn, Application of the combined air-conditioning systems for energy conservation in data center, Energy Build. 68 (2014) 580–586.

[57] Z. Song, Thermal performance of a contained data center with fan-assisted perforations, Appl. Therm. Eng. 102 (2016) 1175–1184.
[58] K. Zhang, X. Zhang, S. Li, A novel data center air-conditioning system based on underfloor air distribution, 2013.
[59] H.A. Alissa, K. Nemati, B.G. Sammakia, K. Schneebeli, R.R. Schmidt, M.J. Seymour, Chip to facility ramifications of containment solution on IT airflow and uptime, IEEE Trans. Compon. Packag. Manuf. Technol. 6 (2016) 67–78.
[60] H.A. Alissa, K. Nemati, B.G. Sammakia, M.J. Seymour, R. Tipton, D. Mendo, et al., Chip to chiller experimental cooling failure analysis of data centers: the interaction between IT and facility, IEEE Trans. Compon. Packag. Manuf. Technol. 6 (2016) 1361–1378.
[61] K.A. Nemati, H. Alissa, B.T. Murray, B.G. Sammakia, R. Tipton, M.J. Seymour, Comprehensive experimental and computational analysis of a fully contained hybrid server cabinet, J. Heat Transf. 139 (2017) 082101.
[62] A. Habibi Khalaj, S.K. Halgamuge, A review on efficient thermal management of air- and liquid-cooled data centers: from chip to the cooling system, Appl. Energy 205 (2017) 1165–1188.
[63] C. Nadjahi, H. Louahlia, S. Lemasson, A review of thermal management and innovative cooling strategies for data center, Sustain. Comput. Inf. Syst. 19 (2018) 14–28.
[64] A.C. Kheirabadi, D. Groulx, Experimental evaluation of a thermal contact liquid cooling system for server electronics, in: 9th World Conference on Experimental Heat Transfer, Fluid Mechanics and Thermodynamics, 2017.
[65] S. Alkharabsheh, U.L.N. Puvvadi, B. Ramakrishnan, K. Ghose, B. Sammakia, Failure analysis of direct liquid cooling system in data centers, J. Electron. Packag. 140 (2018) 020902-1–8.
[66] A. Almoli, A. Thompson, N. Kapur, J. Summers, H. Thompson, G. Hannah, Computational fluid dynamic investigation of liquid rack cooling in data centres, Appl. Energy 89 (2012) 150–155.
[67] T. Gao, B. Sammakia, E. Samadiani, R. Schmidt, Steady state and transient experimentally validated analysis of hybrid data centers, J. Electron. Packag. (2015).
[68] K. Nemati, H.A. Alissa, B.T. Murray, K. Schneebeli, B. Sammakia, Experimental failure analysis of a rear door heat exchanger with localized containment, IEEE Trans. Compon. Packag. Manuf. Technol. 7 (2017) 882–892.
[69] X. Qian, Z. Li, Z. Li, Entransy and exergy analyses of airflow organization in data centers, Int. J. Heat Mass Transf. 81 (2015) 252–259.
[70] S. McAllister, V.P. Carey, A. Shah, C. Bash, C. Patel, Strategies for effective use of exergy-based modeling of data center thermal management systems, Microelectron. J. 39 (2008) 1023–1029.
[71] V. Sundaralingam, S. Isaacs, P. Kumar, Y. Joshi, Modeling thermal mass of a data center validated with actual data due to chiller failure, in: ASME 2011 International Mechanical Engineering Congress and Exposition, IMECE 2011, November 11, 2011–November 17, 2011, American Society of Mechanical Engineers (ASME), Denver, CO, United States, 2011, pp. 169–175.
[72] R. Zhou, Z. Wang, C.E. Bash, T. Cader, A. McReynolds, Failure resistant data center cooling control through model-based thermal zone mapping, HP Laboratories Technical Report, 2012.
[73] K.C. Karki, S.V. Patankar, A. Radmehr, Techniques for controlling airflow distribution in raised-floor data centers, in: ASME 2003 International Electronic Packaging Technical Conference and Exhibition, 2003, pp. 621–628.
[74] Y. Pan, R. Yin, Z. Huang, Energy modeling of two office buildings with data center for green building design, Energy Build. 40 (2008) 1145–1152.
[75] H.S. Sun, S.E. Lee, Case study of data centers' energy performance, Energy Build. 38 (2006) 522–533.
[76] H. Zhang, S. Shao, H. Xu, H. Zou, C. Tian, Free cooling of data centers: a review, Renew. Sustain. Energy Rev. 35 (2014) 171–182.
[77] S.-W. Ham, M.-H. Kim, B.-N. Choi, J.-W. Jeong, Energy saving potential of various air-side economizers in a modular data center, Appl. Energy 138 (2015) 258–275.
[78] S.-W. Ham, M.-H. Kim, B.-N. Choi, J.-W. Jeong, Simplified server model to simulate data center cooling energy consumption, Energy Build. 86 (2015) 328–339.
[79] V. Depoorter, E. Oró, J. Salom, The location as an energy efficiency and renewable energy supply measure for data centres in Europe, Appl. Energy 140 (2015) 338–349.
[80] J. Siriwardana, S. Jayasekara, S.K. Halgamuge, Potential of air-side economizers for data center cooling: a case study for key Australian cities, Appl. Energy 104 (2013) 207–219.
[81] S.-W. Ham, J.-S. Park, J.-W. Jeong, Optimum supply air temperature ranges of various air-side economizers in a modular data center, Appl. Therm. Eng. 77 (2015) 163–179.
[82] K.-P. Lee, H.-L. Chen, Analysis of energy saving potential of air-side free cooling for data centers in worldwide climate zones, Energy Build. 64 (2013) 103–112.
[83] T. Gao, M. David, J. Geer, R. Schmidt, B. Sammakia, Experimental and numerical dynamic investigation of an energy efficient liquid cooled chiller-less data center test facility, Energy Build. 91 (2015) 83–96.
[84] S.-W. Ham, J.-W. Jeong, Impact of aisle containment on energy performance of a data center when using an integrated water-side economizer, Appl. Therm. Eng. 105 (2016) 372–384.
[85] M.P. David, M.K. Iyengar, P. Parida, R.E. Simons, M. Schultz, M. Gaynes, et al., Impact of operating conditions on a chiller-less data center test facility with liquid cooled servers, in: 13th InterSociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, ITherm 2012, May 30, 2012–June 1, 2012, IEEE Computer Society, San Diego, CA, United States, 2012, pp. 562–573.
[86] M. Iyengar, M. David, P. Parida, V. Kamath, B. Kochuparambil, D. Graybill, et al., Server liquid cooling with chiller-less data center design to enable significant energy savings, in: 28th IEEE SEMI-THERM Symposium, 2012, pp. 212–223.



[87] B. Durand-Estebe, C. Le Bot, J.N. Mancos, E. Arquis, Simulation of a temperature adaptive control strategy for an IWSE economizer in a data center, Appl. Energy 134 (2014) 45–56.
[88] Z. Wang, X. Zhang, Z. Li, M. Luo, Analysis on energy efficiency of an integrated heat pipe system in data centers, Appl. Therm. Eng. 90 (2015) 937–944.
[89] The Green Grid, Green grid metrics: describing data center power efficiency, White Paper, 2007.
[90] H. Tian, Z. He, Z. Li, A combined cooling solution for high heat density data centers using multi-stage heat pipe loops, Energy Build. 94 (2015) 177–188.
[91] F. Zhou, X. Tian, G. Ma, Investigation into the energy consumption of a data center with a thermosyphon heat exchanger, Chin. Sci. Bull. 56 (2011) 2185–2190.
[92] H. Zhang, S. Shao, H. Xu, H. Zou, C. Tian, Integrated system of mechanical refrigeration and thermosyphon for free cooling of data centers, Appl. Therm. Eng. 75 (2015) 185–192.
[93] K. Zhang, X. Zhang, S. Li, An air source system combined underfloor air distribution with PCM and the method of energy storage and release, 2014.
[94] K. Zhang, X. Zhang, S. Li, A water looped system combined underfloor air distribution with PCM and the method of energy storage and release, 2014.
[95] K. Ebrahimi, G.F. Jones, A.S. Fleischer, A review of data center cooling technology, operating conditions and the corresponding low-grade waste heat recovery opportunities, Renew. Sustain. Energy Rev. 31 (2014) 622–638.
[96] T. Lu, X. Lü, M. Remes, M. Viljanen, Investigation of air management and energy performance in a data center in Finland: case study, Energy Build. 43 (2011) 3360–3372.
[97] S. Zimmermann, I. Meijer, M.K. Tiwari, S. Paredes, B. Michel, D. Poulikakos, Aquasar: a hot water cooled data center with direct energy reuse, Energy 43 (2012) 237–245.
[98] A. Haywood, J. Sherbeck, P. Phelan, G. Varsamopoulos, S.K.S. Gupta, Thermodynamic feasibility of harvesting data center waste heat to drive an absorption chiller, Energy Convers. Manage. 58 (2012) 26–34.
[99] N. Ahuja, C. Rego, S. Ahuja, M. Warner, A. Docca, Data center efficiency with higher ambient temperatures and optimized cooling control, in: 27th Annual IEEE Semiconductor Thermal Measurement and Management, SEMI-THERM 27 2011, March 20, 2011–March 24, 2011, Institute of Electrical and Electronics Engineers Inc., San Jose, CA, United States, 2011, pp. 105–109.
[100] T.J. Breen, E.J. Walsh, J. Punch, A.J. Shah, N. Kumari, C.E. Bash, et al., Influence of experimental uncertainty on prediction of holistic multi-scale data center energy efficiency, in: ASME 2011 Pacific Rim Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Systems, MEMS and NEMS, vol. 2, 2011, pp. 553–563.
[101] F. De Lorenzi, C. Vomel, Neural network-based prediction and control of air flow in a data center, J. Therm. Sci. Eng. Appl. 4 (2012) 021005–21008.
[102] B. Durand-Estebe, C. Le Bot, J.N. Mancos, E. Arquis, Data center optimization using PID regulation in CFD simulations, Energy Build. 66 (2013) 154–164.
[103] Z. Song, B.T. Murray, B. Sammakia, Airflow and temperature distribution optimization in data centers using artificial neural networks, Int. J. Heat Mass Transf. 64 (2013) 80–90.
[104] M. Lin, S. Shao, X. Zhang, J.W. VanGilder, V. Avelar, X. Hu, Strategies for data center temperature control during a cooling system outage, Energy Build. 73 (2014) 146–152.
[105] Z.M. Pardey, D.W. Demetriou, H.S. Erden, J.W. VanGilder, H.E. Khalifa, R.R. Schmidt, Proposal for standard compact server model for transient data center simulations, in: 2015 ASHRAE Winter Conference, January 24, 2015–January 28, 2015, Amer. Soc. Heating, Ref. Air-Conditioning Eng. Inc., Chicago, IL, United States, 2015, pp. 413–421.
[106] J.W. VanGilder, C.M. Healey, Z.M. Pardey, X. Zhang, A compact server model for transient data center simulations, in: 2013 ASHRAE Annual Conference, June 22, 2013–June 26, 2013, PART 2 ed., Amer. Soc. Heating, Ref. Air-Conditioning Eng. Inc., Denver, CO, United States, 2013, pp. 358–370.
[107] X. Zhang, J.W. VanGilder, C.M. Healey, Z.R. Sheffer, Compact modeling of data center air containment systems, in: ASME 2013 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, InterPACK 2013, July 16, 2013–July 18, 2013, American Society of Mechanical Engineers (ASME), Burlingame, CA, United States, 2013, p. V002T09A26.
[108] Z. Song, B.T. Murray, B. Sammakia, Numerical investigation of inter-zonal boundary conditions for data center thermal analysis, Int. J. Heat Mass Transf. 68 (2014) 649–658.
[109] Z. Song, B.T. Murray, B. Sammakia, A compact thermal model for data center analysis using the zonal method, Numerical Heat Transf.; Part A: Appl. 64 (2013) 361–377.
[110] Z. Song, B.T. Murray, B. Sammakia, Improved zonal model for data center analysis, in: ASME 2013 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, InterPACK 2013, July 16, 2013–July 18, 2013, American Society of Mechanical Engineers (ASME), Burlingame, CA, United States, 2013, p. V002T09A1.
[111] Z. Song, B.T. Murray, B. Sammakia, A dynamic compact thermal model for data center analysis and control using the zonal method and artificial neural networks, Appl. Therm. Eng. 62 (2014) 48–57.
[112] M. Bana, A. Docca, S. Davies, An ACE performance assessment case study; from compromised to optimized: one data center: $10 million saved, White Paper, Future Facilities Ltd, 2014.
[113] R.R. Schmidt, E.E. Cruz, M. Iyengar, Challenges of data center thermal management, IBM J. Res. Dev. 49 (2005) 709–723.
[114] M.J. Seymour, M.K. Herrlin, Data center optimization using performance metrics, in: ASME 2015 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems collocated with the ASME 2015 13th International Conference on Nanochannels, Microchannels, and Minichannels, vol. 1, 2015, p. V001T09A22.
[115] M. Xie, J. Liu, K. Zhang, X. Niu, B. Zhou, Rack exergy-loss index for analyzing and evaluating the thermal environment of data center, in: 8th Asian Conference on Refrigeration and Air-Conditioning, ACRA 2016, May 15, 2016–May 17, 2016, Taipei, Taiwan, 2016.
[116] R.K. Sharma, C.E. Bash, C.D. Patel, Dimensionless parameters for evaluation of thermal design and performance of large-scale data centers, in: 8th AIAA/ASME Joint Thermophysics and Heat Transfer Conference, 2002.
[117] M.K. Herrlin, Rack cooling effectiveness in data centers and telecom central offices: the Rack Cooling Index (RCI), American Society of Heating, Refrigerating and Air-Conditioning Engineers, 2005.
[118] M.K. Herrlin, Improved data center energy efficiency and thermal performance by advanced airflow analysis, Digital Power Forum, 2007.
[119] J. Cho, J. Yang, C. Lee, J. Lee, Development of an energy evaluation and design tool for dedicated cooling systems of data centers: sensing data center cooling energy efficiency, Energy Build. 96 (2015) 357–372.
[120] B. Lajevardi, K.R. Haapala, J.F. Junker, Real-time monitoring and evaluation of energy efficiency and thermal management of data centers, J. Manuf. Syst. 37 (2015) 511–516.
[121] A.H. Beitelmal, D. Fabris, Servers and data centers energy performance metrics, Energy Build. 80 (2014) 562–569.
[122] A. Capozzoli, G. Serale, L. Liuzzo, M. Chinnici, Thermal metrics for data centers: a critical review, Energy Proc. 62 (2014) 391–400.
[123] J. Yuventi, R. Mehdizadeh, A critical analysis of Power Usage Effectiveness and its use in communicating data center energy consumption, Energy Build. 64 (2013) 90–94.
[124] G.A. Brady, N. Kapur, J.L. Summers, H.M. Thompson, A case study and critical assessment in calculating power usage effectiveness for a data centre, Energy Convers. Manage. 76 (2013) 155–161.
[125] R. Tozer, C. Kurkjian, M. Salim, Air management metrics in data centers, ASHRAE Trans. 115 (2009) 63–70.
