Highlights
1. We have integrated free air cooling into the hybrid cooling system. While the conference version considers only liquid cooling, we have extended the paper to consider a novel and timely problem: coordination with free air cooling.
2. We have presented new evaluation results in three new experimental subsections and tested our scheme from more perspectives.
3. We have added a detailed discussion of different cooling techniques.
Data Center Power Minimization with Placement Optimization of Liquid-Cooled Servers and Free Air Cooling

Li Li, Wenli Zheng, Xiaodong Wang, Xiaorui Wang
Department of Electrical and Computer Engineering, The Ohio State University, USA

Email addresses: [email protected] (Li Li), [email protected] (Wenli Zheng), [email protected] (Xiaodong Wang), [email protected] (Xiaorui Wang)

This work was supported, in part, by NSF CAREER Award CNS-1143607. This is a significantly extended version of a conference paper [1].
Abstract
Most of today's data centers are equipped with servers that rely on air cooling, which is well known to have low cooling efficiency due to undesired air recirculation. As a result, many data centers have started to adopt liquid cooling and free air cooling for improved cooling efficiency. In this paper, we make two key observations. First, since data centers normally replace only a portion of their servers at a time, an important problem is where in the data center to place those new liquid-cooled servers for the best return on their investment. Given the complex thermal dynamics in a data center, we find, in our process of deploying liquid-cooled servers, that different placement strategies lead to significantly different cooling power consumption. Second, different cooling techniques, including traditional air cooling, liquid cooling, and the emerging free air cooling, must be intelligently coordinated with dynamic workload allocation in order to minimize the cooling and server power of a data center. Based on these two observations, we propose SmartPlace, an intelligent placement algorithm that deploys liquid-cooled servers to minimize the power consumption of the data center cooling system. SmartPlace also takes into account the coordination with free air cooling and dynamic workload distribution among servers for jointly minimized cooling and server power in the entire data center. We compare SmartPlace with a state-of-the-art cooling optimization solution for two data centers with 1,280 and 10,240 servers, respectively. The results show that SmartPlace achieves up to 26.7% (with an average of 15%) less total power consumption with dynamically guaranteed application response time. Our hardware testbed results also demonstrate the effectiveness of SmartPlace. Moreover, we analyze how soon a data center can gain a full return on the capital investment of liquid-cooled servers.
Keywords: Data Center, Liquid Cooling, Free Air Cooling, Workload Distribution, Power Management.
1. Introduction
Recent studies have shown that a significant portion of the power consumption of many data centers is caused by the inefficiency of their cooling systems [2][3]. In order to quantify the cooling efficiency, data centers commonly employ a metric called Power Usage Effectiveness (PUE), which is defined as the ratio of the total data center energy consumption to the energy consumed by IT equipment such as servers. A PUE of 2.0 or higher is still common among existing commercial and governmental data centers, according to a survey in 2012 [4]. It is thus important to improve the cooling efficiency of data centers. To meet the cooling challenge of servers, many data centers have started to consider liquid cooling and free air cooling. Liquid cooling can provide quiet, uniform, and efficient cooling by isolating IT equipment from the existing HVAC (heating, ventilation, and air conditioning) systems. For example, Google has built a water-cooled data center in Hamina, Finland [5].
Liquid cooling has been shown to be a powerful alternative to the traditional air cooling that is widely adopted in existing data centers [6]. The main reason is that liquid coolants commonly have much better heat transfer properties than air. Free air cooling has also been adopted by some data centers in recent years. It leverages the relatively cold air outside the data center for cooling and thus saves the power otherwise spent chilling the hot air returned from the IT equipment. Given those emerging cooling techniques, hybrid cooling, i.e., applying different cooling techniques in the same data center, has been increasingly adopted as a more practical solution. For example, the European Council for Nuclear Research has a hybrid-cooled data center in which nearly 9% of the servers are liquid-cooled [7], and the facility also employs free cooling. One of the advantages of hybrid cooling is that it can provide localized liquid cooling to selectively cool down any potential hot spots (e.g., servers at the top of a rack), such that the global air cooling systems can operate at a much higher temperature set point for a higher cooling efficiency at a much lower cost (than having all servers liquid-cooled). As shown in a recent study of a large collection of field data from different production data centers [8], the high power consumption of the CRAC systems in today's data centers is mainly due to their unnecessarily low temperature set points. Approximately 2-5% of the cooling power can be saved if we simply increase the set points
by 1°C [9]. Another advantage of hybrid cooling is the inclusion of free air cooling. When the air outside the data center meets the cooling requirement, data centers can use the cold outside air to cool down the servers. As a result, hybrid cooling efficiently saves cooling power and thus greatly reduces the cooling cost.

For liquid cooling, there are currently different ways to implement it in data centers, such as 1) directly cooling only the microprocessors with cold plates, 2) enclosing a rack to make it thermally neutral, and 3) submerging servers completely in dielectric fluid. Since these approaches can change the already complex thermal dynamics of a data center in a sophisticated way, the impacts of liquid or hybrid cooling need to be carefully analyzed to indeed improve cooling efficiency. When a data center operator needs to replace some air-cooled servers with new-generation liquid-cooled servers, an important question to be answered is where in the data center to place those new liquid-cooled servers for the best improvement in cooling efficiency and thus the highest return on the investment. We find, in the process of deploying liquid-cooled servers in our server room, that different placement strategies can lead to significantly different cooling efficiencies. A naive solution would be to replace the hottest servers with liquid cooling, which however often leads to inferior efficiency, because those hottest servers can be dynamically shut down when workload distribution is optimized.

For free air cooling, an important problem is how to efficiently coordinate such a hybrid cooling system, which includes traditional air cooling, liquid cooling, and free air cooling. Currently, existing data centers that adopt multiple cooling techniques commonly use preset outside temperature thresholds to switch between different cooling systems, regardless of the time-varying workload. Such a simplistic solution can often lead to unnecessarily low cooling efficiencies. Although some previous studies [10] have proposed to intelligently distribute the workload across the servers and manage the cooling system according to the real-time workload to avoid over-cooling, they address only one specific cooling technology and thus the resulting workload distribution might not be optimal for a hybrid cooling system.

In this paper, we propose SmartPlace, an intelligent algorithm that uses a given number of liquid-cooled servers to replace selected air-cooled servers in a hybrid-cooled data center and effectively coordinates different cooling techniques with dynamic workload distribution for minimized power consumption. We first model the impacts of the physical locations of liquid-cooled servers, as determined by a placement strategy, on cooling power consumption. Since the workload distribution among servers can significantly affect cooling efficiency [11][12], we then formulate a constrained optimization problem for cooling power minimization, to determine both the optimal server placement and the workload distribution among different servers. Since server placement is an offline decision but workload varies dynamically, we try to find a single placement strategy that works best for most workload levels, as explained in detail in Section 4.2. It has also been shown recently that reducing cooling power with workload distribution may increase server idle power, because distributing workload to more servers can reduce cooling power due to lower server temperatures but increases the number of active servers [10][12]. Therefore, we further propose to jointly minimize both cooling and server power by dynamically turning on/off selected (idle) servers in response to the varying workload, in addition to server placement and workload distribution. Note that by turning off servers, we actually put them into a deep sleep mode such that they can quickly wake up with almost negligible overheads (e.g., 30 µs [13]).

After the deployment of liquid-cooled servers, SmartPlace integrates free air cooling and takes into account how to efficiently operate such a hybrid cooling system. In order to minimize the power consumption of a hybrid-cooled data center at runtime, we face several new challenges. First, the different characteristics of the three cooling systems (liquid cooling, free air cooling, and traditional CRAC air cooling) demand a systematic approach to coordinate them effectively. Second, workload distribution in such a hybrid-cooled data center needs to be carefully planned, with the consideration of various factors such as the varying ambient temperature. We use a power optimization scheme to minimize the total power consumption of a hybrid-cooled data center by intelligently managing the hybrid cooling system and distributing the workload.

We compare SmartPlace with a state-of-the-art cooling optimization solution and show that SmartPlace achieves up to 26.7% less total power consumption with guaranteed application response time. We also test some commonly used strategies for liquid-cooled server placement to show the advantage of our algorithm. Furthermore, we analyze the optimal number of liquid-cooled servers to purchase, such that the total cost of the data center is minimized over a five-year period (i.e., the server lifetime), as well as how soon the saved cooling cost can exceed the extra capital investment of those servers and the related installation. Our results show that a data center with 1,280 servers and a 42% average workload (in terms of CPU utilization) can gain a full return on the investment in less than 3 years if it installs 540 liquid-cooled servers. It is important to note that SmartPlace is not limited to liquid-cooled servers and can be generalized to place servers with different thermal profiles for power minimization. To the best of our knowledge, SmartPlace is the first work that studies the impacts of the physical positions of servers with different thermal profiles on data center cooling efficiency. In addition, we also compare SmartPlace with state-of-the-art cooling system management schemes and show that SmartPlace achieves higher utilization of free air cooling and leads to significantly more energy savings.

Specifically, our major contributions are as follows:
• We propose to address two important problems: intelligent placement of liquid-cooled servers and efficient management of different cooling systems in a data center for jointly minimized cooling and server power.

• We formulate three constrained optimization problems to 1) minimize the cooling power by server placement and workload distribution, 2) jointly minimize both cooling and server power by additionally turning on/off selected (idle) servers in response to the varying workload, and 3) minimize the total power by dynamically adjusting the cooling mode according to the varying workload and ambient temperatures.

• We integrate an application response time guarantee into power minimization. We compare our algorithm with a state-of-the-art solution [10] and commonly used placement strategies for two data centers with 1,280 and 10,240 servers, respectively. The results show the advantages of our solution. Our hardware testbed results also demonstrate the effectiveness of SmartPlace.

• We quantitatively analyze the optimal number of liquid-cooled servers to purchase for minimized data center costs and how long it takes to gain a full return on the capital investment of those servers.

The rest of the paper is organized as follows. We review the related work in Section 2. In Section 3, we introduce the background on different cooling systems. Section 4 formulates the optimization problems to minimize cooling and server power and the problem to intelligently coordinate different cooling systems. We discuss our simulation results in Section 5 and hardware results in Section 6. Finally, Section 7 concludes the paper.

2. Related Work

Air cooling is used in most existing data centers. Previous work has studied various configurations of the air cooling system (e.g., [14][15]). Different from the aforementioned work, we optimize the cooling efficiency of a hybrid liquid-air cooling system in this paper.

Liquid cooling has been proposed for computer systems at different levels. Most work on thermal modeling and management for liquid cooling has been studied at the chip level (e.g., [16][17]). At the server level, hybrid-cooled servers with air cooling and water cooling devices are studied in [6]. Rubenstein et al. [18] have constructed an analytic model for hybrid-cooled data centers. Free air cooling has also attracted wide research attention. Christy et al. [19] study two primary free cooling systems, the air economizer and the water economizer. Gebrehiwot et al. [20] study the thermal performance of an air economizer for a modular data center using computational fluid dynamics. While those studies focus only on the cooling devices, our work jointly considers server placement, workload distribution, server on/off, and cooling mode selection for minimized cooling and server power at the data center level.

Workload placement has been previously studied for power minimization and thermal management in data centers (e.g., [21][22][23][24][25][26]). In particular, Ahmad et al. [10], Li et al. [12] and Li et al. [27] have recently proposed to jointly optimize idle power and cooling power. In sharp contrast to the existing work, we determine the optimal placement of liquid-cooled servers and the efficient cooling mode, in addition to workload distribution and putting servers to sleep. To the best of our knowledge, our work is the first that proposes server placement (instead of workload placement) and efficiently coordinates different cooling systems for minimized cooling and server power.

3. Background on Different Cooling Technologies

In this section, we introduce three commonly adopted cooling systems in existing data centers.

Traditional CRAC Air Cooling: This is the most widely used cooling technology in existing data centers. This system deploys several CRAC units in the computer room to supply cold air. The cold air usually goes under the raised floor before joining the cold aisle through perforated tiles to cool down the servers. The hot air from the servers is output to the hot aisle and returned to the CRAC system to be cooled and reused. The deployment of cold aisles and hot aisles is used to form isolation between cold and hot air. However, due to the existence of seams between servers and racks, as well as the space close to the ceiling where there is no isolation, cold air and hot air are often mixed to a certain extent, which decreases the cooling efficiency.

Liquid Cooling: In the following paragraphs, we discuss three widely adopted liquid cooling technologies.

Direct liquid cooling: In this technique, the CPU in a server is directly attached to a cold plate, while other components are cooled by chilled air flow. Direct liquid cooling improves the cooling efficiency by enhancing two heat transfer processes: the heat-sink-to-air heat transfer process and the air-to-chilled-water heat transfer process. For example, one of the data center models we use in this paper has 1,280 servers. The total cost for the installation of the liquid cooling system (including all the needed devices and related costs such as piping, monitoring, and detection) is approximately $396,000 [28].

Rack-level liquid cooling: In this technique, the hot exhaust air from the servers is cooled down through a liquid-cooled door. A liquid-cooled door is a device installed on the back of a rack. The hot exhaust air leaving the servers first encounters this device and thus gets cooled down. We assume that each rack in our example data center model consists of 40 1U servers, such that there are 32 racks in total in the data center. To replace all the racks with liquid-cooled racks, a total cost of $987,968 is needed [29].

Submerge cooling: This approach submerges servers in liquid, usually mineral oil. Its cooling enclosures can eliminate the need for CRAC units and chillers, allowing users to cool high-density servers at a fraction of the cooling cost of traditional racks. The total cost is about $588,000 [30].

A comparison of the three liquid cooling techniques is shown in Table 1. Based on the above analysis, we use the direct liquid cooling technique as an example to demonstrate the effectiveness of our solution, because it is the most commonly adopted technique in industry, due to its lowest cost and least requirements for maintenance. Direct liquid cooling has also been studied in previous research [6, 31] for similar reasons.
It is important to note that our solution can also be applied to the other two liquid cooling techniques.

Table 1: Comparison of three liquid cooling techniques in a data center with 1,280 servers

Technique             | Advantage                  | Disadvantage                | Capital Cost
Direct Liquid Cooling | Flexible implementation    | More tubing connections     | $396,000
Rack-Level Cooling    | Improved working condition | Hard to maintain, high cost | $987,968
Submerge Cooling      | No need for fans           | Complex implementation      | $588,000

Free Air Cooling: Free air cooling is a highly efficient cooling approach that uses the cold air outside the data center and saves power by shutting off the chiller system [32][33]. It is usually utilized within a range of outside temperature and humidity. Within this range, the outside air can be used for cooling via an air handler fan. The traditional CRAC system is employed by these data centers as the backup cooling system.

4. SmartPlace: Cooling and Server Power Minimization

In this section, we first formulate two constrained optimization problems to minimize 1) cooling power only and 2) cooling and server power jointly, through intelligently deploying liquid-cooled servers. We then formulate another optimization problem to minimize the total power through integrating free air cooling and efficiently coordinating different cooling systems.

4.1. Cooling Power Optimization

Figure 1: Cooling system of a hybrid-cooled data center. The cold plates used for liquid cooling are installed inside the liquid-cooled servers.

Figure 1 illustrates the cooling system of a hybrid-cooled data center, which includes traditional air cooling and direct liquid cooling. Both of them use a chiller and a cooling tower to provide coolant. The traditional air cooling system uses CRAC units to remove heat from the returned hot air. The liquid cooling system uses a Coolant Distribution Unit (CDU) to provide coolant to each cold plate in the servers. Maintenance is vital to the successful implementation of liquid cooling in data centers and is an active research topic in mechanical engineering [34][28], which is beyond the scope of this paper. In this paper, we assume that maintenance has already been provided in the data center. We model the maintenance costs and focus primarily on improving cooling efficiency.

The cooling power required to cool a data center is determined by the server power and the COP (coefficient of performance) as P_server/COP. The COP represents the efficiency of the cooling system; a higher COP means more efficient cooling. We use a commonly adopted air cooling COP from [11] in this work. For liquid cooling, the cooling power mainly includes the chiller power that is used to take away heat from the liquid and the pump power that is used to circulate the liquid. We use the general principle that the temperature drop ΔT across a given absolute thermal resistance R_θ with a given heat flow Q through it is:

ΔT = Q × R_θ    (1)

In the case of direct liquid cooling, Equation 1 can be represented as:

T_mp − T_chillersup = Q × (R_p + R_TIM + R_cp)    (2)

where T_mp is the temperature of the microprocessor, T_chillersup is the temperature of the cool water provided by the chiller, Q is the heat transfer rate between the microprocessor and the cold plate, R_p is the thermal resistance of the chip package, R_TIM is the thermal resistance of the thermal interface material located between the package and the cold plate, and R_cp is the thermal resistance of the cold plate, which is related to the flow. R_cp decreases as the rate at which the coolant flows into the cold plate increases, according to [31]. Because the pump power is negligible compared with the chiller power [31], we use the COP of the chiller to represent the efficiency of a liquid cooling system. The COP of the chiller is determined by the chiller set point (the reference temperature of the cool water supplied by the chiller) according to [35]:

COP_liquid ≈ COP_chiller = 1 / (a(1 + b(T_chillersup − T_0)))    (3)

where T_0 is the chiller set point, and a and b are chiller-dependent coefficients. In order to guarantee that the microprocessor temperature is below the threshold, we first run the CPU at full utilization and set the flow rate to the maximum. We then get the lowest chiller set point that guarantees that the microprocessor works in the safe temperature range. Finally, we use that chiller set point to determine the COP of the liquid cooling system according to Equation 3.
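To make the set-point calculation concrete, the sketch below is our own illustration of Equations 2 and 3 (not the authors' code); all thermal resistances, the CPU heat flow, and the chiller coefficients a, b, and T_0 are placeholder values.

```python
# Sketch of the chiller set-point and COP_liquid calculation (Equations 2 and 3).
# Every numeric constant below is an illustrative placeholder, not a value from the paper.

def lowest_chiller_setpoint(t_mp_max, q_cpu, r_p, r_tim, r_cp_min):
    """Equation 2 rearranged: the warmest chiller supply temperature that still keeps
    the microprocessor at or below t_mp_max when the CPU runs at full utilization
    (heat flow q_cpu) and the coolant flow is at its maximum (minimum cold-plate
    resistance r_cp_min)."""
    return t_mp_max - q_cpu * (r_p + r_tim + r_cp_min)

def cop_liquid(t_chillersup, a, b, t0):
    """Equation 3: chiller COP as a function of the chilled-water supply temperature,
    with chiller-dependent coefficients a, b and set point t0."""
    return 1.0 / (a * (1.0 + b * (t_chillersup - t0)))

# Example with made-up numbers: a 70 C CPU limit, 100 W of CPU heat, and 0.2 K/W of
# total thermal resistance give a 50 C supply temperature.
t_sup = lowest_chiller_setpoint(t_mp_max=70.0, q_cpu=100.0,
                                r_p=0.08, r_tim=0.02, r_cp_min=0.10)
print(cop_liquid(t_sup, a=0.2, b=-0.01, t0=7.0))
```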
Before we present our power consumption minimization problem, we first define the following notation:

• N is the number of servers in the data center.

• M is the number of liquid-cooled servers.

• P_i is the server power consumption of server i; it includes the idle power P_i^idle and the compute power P_i^compute of the server.

• P_i^compute is the maximum computing power when the workload is 100%.

• P_i^idle is the static idle power consumed by the server.

• α is the percentage of energy consumed by components other than the CPU, including memory, disk, etc.

• W_i is the workload handled by server i, in terms of CPU utilization.

• T_i^in is the inlet temperature of server i.

• T_i^out is the outlet temperature of server i (Celsius).

• β_i is a binary variable used to indicate whether server i should be liquid-cooled (β_i = 1) or air-cooled (β_i = 0).

With the above notation, our optimization problem is formulated as:

Min{P_cooling}

where

P_cooling = Σ_{i=1}^{N} (1 − α)β_i(W_i × P_i^compute + P_i^idle) / COP_liquid + Σ_{i=1}^{N} ((1 − β_i) + αβ_i)(W_i × P_i^compute + P_i^idle) / COP_air    (4)

subject to the constraints:

Σ_{i=1}^{N} β_i = M    (5)

T_i^out < T_threshold^out,  T_i^in < T_threshold^in    (6)

Σ_{i=1}^{N} W_i = W_total    (7)

The first term in Equation 4 is the liquid cooling power that is used to take away the heat generated by the microprocessors of the liquid-cooled servers, while the second term is the air cooling power that is used to take away the heat generated by all other server components and the heat generated by the air-cooled servers. The inlet and outlet temperatures of each server (T_i^in and T_i^out) are mainly impacted by three factors (in Equations 8 and 9): the CRAC output temperature (T_cracsup), the server workload (W_i), and the air recirculation pattern (h_ji) in the data center. We use the outlet and inlet temperatures as constraints in our problem formulation to capture all three characteristics of the data center.

Figure 2: Air circulation in an air-cooled data center. Some hot air can be recirculated to the inlet and mixed with the cold air, degrading the cooling efficiency.

Figure 2 is a diagram illustrating the air recirculation in an air-cooled data center. We adopt the air recirculation model in [36] to characterize the recirculation impacts in our data center and use it to predict the temperature distribution. According to the model, the outlet temperature of each server can be represented as:

K_i T_i^out = Σ_{j=1}^{n} h_ji K_j T_j^out + (K_i − Σ_{j=1}^{n} h_ji K_j) T_cracsup + P_i^air    (8)

where h_ij is the percentage of heat recirculated from server i to server j. K_i = ρ f_i C_p, where C_p is the heat capacity of air, ρ is the air density, and f_i is the incoming air flow rate to server i. K_i T_i^out is the amount of heat carried in the outgoing air flow from server i. T_cracsup is the temperature of the air discharged into the plenum by the CRACs. P_i^air is the part of the power consumption of server i that can affect its outlet temperature. In our experiments, we use a computational fluid dynamics (CFD) software package to get h_ij in different scenarios.

P_i^air = (1 − β_i)P_i + αβ_i P_i    (9)

where P_i = W_i × P_i^compute + P_i^idle [37]. P_i^air = P_i when server i is completely air cooled, which means the outlet temperature of server i is affected by its total power consumption. On the other hand, when server i is liquid cooled, P_i^air = αP_i, which means that the outlet temperature of server i is affected only by the power consumption of the non-CPU components, since the heat generated by the CPU is taken away by the liquid cooling system.

Cooling conditions, such as the air flow rate and the recirculation impact, vary across different locations of a data center. For data centers with the standard configuration of alternating hot and cold aisles [23][38][11], when the workload is uniformly distributed, air flow in the middle of the aisles commonly has a lower temperature than that at the ends of the aisles. Recirculation effects are also different for different servers. Hot air from servers far away from a CRAC commonly has more recirculation impact on other servers. The reason is that the outlet air of those servers cannot be absorbed by the CRAC immediately if it is not extracted through the ceiling tiles.

The placement of liquid-cooled servers (β_i) and the workload distribution impact P_i^air according to Equation 9. P_i^air then impacts the outlet temperature of server i through Equation 8. Different server placement strategies also impact air recirculation (h_ij) differently, as shown in Equation 8. In order to guarantee that every server works in the safe temperature range, the supply temperature of the CRAC unit should be adjusted, which in turn influences the efficiency of the cooling system. Therefore, the placement of liquid-cooled servers and the workload distribution impact the cooling power consumption according to Equation 4. The objective of this problem is to find the optimal set of β_i (server placement) and W_i (workload distribution) that minimizes the cooling power consumption.

It is important to note that the optimization result depends on the total amount of workload of the data center (W_total), which varies at runtime. While workload distribution can be adjusted dynamically, server placement is an offline decision that cannot be changed online. To handle varying workload, we analyze the workload traces of the data center to find all the possible workload levels, from the lowest to the highest, with an increment of 0.1%. We then conduct our optimization offline to find the optimal placement for every workload level. Then, according to the frequency of occurrence of each workload level in the trace, we choose the most popular M locations for the placement of liquid-cooled servers. We discuss this in detail in Section 5.5.
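As a concrete, simplified illustration of how Equations 4, 8, and 9 interact, the following sketch (ours, not the authors' implementation) evaluates the cooling power of one candidate placement vector β and workload vector W; the recirculation matrix h, the quadratic CRAC COP curve, and all server parameters are made-up placeholders, and only the outlet-temperature part of constraint (6) is checked. The offline placement search described above can be layered on top of such an evaluator.

```python
import numpy as np

# Illustrative-only evaluation of the cooling-power objective (Equations 4, 8, 9).
# h, K, the COP curves, and the server parameters are placeholders, not CFD data.

def cooling_power(beta, W, h, K, P_idle, P_compute, alpha,
                  T_out_max, cop_liquid, cop_air):
    """beta[i] = 1 if server i is liquid-cooled, W[i] is its CPU utilization (0..1)."""
    P = W * P_compute + P_idle                  # total server power
    P_air = np.where(beta == 1, alpha * P, P)   # Eq. 9: only non-CPU heat reaches the air
    A = h.T * K                                 # A[i, j] = h[j, i] * K[j]
    M = np.diag(K) - A
    dT = np.linalg.solve(M, P_air)              # Eq. 8: outlet rise above T_cracsup
    T_cracsup = T_out_max - dT.max()            # warmest supply meeting Eq. 6 (outlet part)
    heat_liquid = ((1 - alpha) * beta * P).sum()   # heat removed by the cold plates
    heat_air = P_air.sum()                         # heat removed by the CRAC system
    return heat_liquid / cop_liquid + heat_air / cop_air(T_cracsup)   # Eq. 4

# A quadratic CRAC COP curve of the kind used in prior work (placeholder coefficients).
cop_air = lambda T: 0.0068 * T**2 + 0.0008 * T + 0.458

n = 8
h = np.random.default_rng(0).uniform(0.0, 0.05, size=(n, n))  # toy recirculation matrix
K = np.full(n, 1.19 * 0.0068 * 1005.0)                        # rho * f_i * C_p per server
beta = np.array([1, 1, 0, 0, 0, 0, 0, 0])
W = np.full(n, 0.3)
print(cooling_power(beta, W, h, K, P_idle=100.0, P_compute=200.0, alpha=0.4,
                    T_out_max=40.0, cop_liquid=8.0, cop_air=cop_air))
```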
Figure 3: The 1,280-server data center model used in evaluation.

Figure 4: Total power optimization with varying workload.

4.2. Formulation for Joint Power Minimization

The previous problem formulation mainly focuses on minimizing the cooling power in a data center with a hybrid cooling system. With the workload being unevenly distributed, some servers may not need to handle any workload. In such a case, those servers can be put into sleep to save idle power and woken up later when the workload increases. As estimated in [13], microprocessors require approximately 30 µs to transition from sleep back to active, and DRAM in future servers can use less than 1 µs for the transition. Therefore, it is feasible to quickly put servers into sleep for minimized idle power. Note that we put only idle servers into sleep, so there is no need to do any workload migration. To characterize the ON (active) and OFF (sleep) states of a machine, we introduce a binary variable γ_i, and then P_i^air = ((1 − β_i)P_i + αβ_i P_i)γ_i. With this additional knob, we can minimize the total power consumption of a data center. We call this total power minimization scheme SmartPlace+S.

Min{P_total}

subject to the constraints (5), (6) and:

W_total = Σ_{i=1}^{N} γ_i W_i    (10)

The new constraint (10) represents that all the workload should be distributed on active servers. Note that this optimization scheme only determines the number of active servers and the locations of liquid-cooled servers in an offline manner. To handle varying workload (i.e., W_total), we pre-determine the state of the servers in a static way. According to the workload traces, we first get the highest and lowest levels of workload during a certain period. We then run the optimization from the lowest to the highest level with a granularity of 0.1%. After that, we obtain a solution table which records which servers should be turned off at a given workload level. We can then turn servers on/off according to the table at runtime.
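A minimal sketch of this offline on/off table is given below (our illustration); optimize_active_set stands in for the SmartPlace+S optimization itself, which is not shown, and the dummy optimizer in the example is purely for demonstration.

```python
import math

# Illustrative sketch of the offline on/off solution table (not the authors' code).
# optimize_active_set(w) is assumed to return the set of server ids kept active at
# total workload w, as decided by the SmartPlace+S optimization of Section 4.2.

def build_onoff_table(w_low, w_high, optimize_active_set, step=0.001):
    """Pre-compute the active-server set for every workload level between
    w_low and w_high with 0.1% granularity."""
    table = {}
    for k in range(int(round((w_high - w_low) / step)) + 1):
        w = round(w_low + k * step, 4)
        table[w] = frozenset(optimize_active_set(w))
    return table

def lookup_active_servers(table, w_total, w_low, step=0.001):
    """Runtime lookup: round the measured workload up to the next table entry so the
    chosen active set never under-provisions, and clamp to the table's range."""
    key = round(w_low + math.ceil((w_total - w_low) / step) * step, 4)
    key = min(max(table), max(min(table), key))
    return table[key]

# Example with a dummy optimizer that simply activates enough servers for the load.
dummy = lambda w: range(int(math.ceil(w * 1280)))
tbl = build_onoff_table(0.10, 0.80, dummy)
print(len(lookup_active_servers(tbl, 0.4237, 0.10)))
```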
4.3. Response Time Maintenance

Extension of response time is a key concern for data center power-reduction techniques. Our first scheme, SmartPlace (Section 4.1), does not extend response time as it keeps all the servers active. Our second scheme, SmartPlace+S (Section 4.2), keeps only a subset of servers on, potentially leading to some extension of response time if the workload increases. Hence, SmartPlace+S takes response time into consideration when it determines the number of active servers needed for a given workload. Note that SmartPlace+S can also be extended to consider other performance metrics (such as throughput). We calculate the number of active servers using queuing theory. A data center can be modeled as a GI/G/m queue. Using the Allen-Cullen approximation for the GI/G/m model [39], the response time and the number of servers needed to satisfy a given workload demand are related as follows:

W = 1/µ + (P_m / (µ(1 − ρ))) × ((C_A^2 + C_B^2) / (2m))    (11)

where W is the mean response time, 1/µ is the mean service time of a server, λ is the mean request arrival rate, ρ = λφ/(mf) is the average utilization of a server, φ is the mean request size, f = µφ is the service rate of a server, and m is the number of servers available to serve the requests. P_m = ρ^{(m+1)/2} for ρ < 0.7 and P_m = (ρ^m + ρ)/2 for ρ > 0.7, and C_A^2 and C_B^2 represent the squared coefficients of variation of request inter-arrival times and request sizes, respectively.
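The sketch below (ours; the traffic parameters in the example are placeholders) evaluates Equation 11 and searches for the smallest m that keeps the predicted mean response time under a target; for simplicity it folds the request size into the service rate, so ρ = λ/(mµ).

```python
import math

# Sketch of the GI/G/m response-time estimate of Equation 11 and of the smallest
# server count that meets a response-time target. All parameters are placeholders.

def mean_response_time(m, lam, mu, ca2, cb2):
    """Equation 11. lam: request arrival rate, mu: service rate of one server,
    ca2/cb2: squared coefficients of variation of inter-arrival times and sizes."""
    rho = lam / (m * mu)                      # average per-server utilization
    if rho >= 1.0:
        return math.inf                       # unstable queue
    pm = rho ** ((m + 1) / 2) if rho < 0.7 else (rho ** m + rho) / 2
    return 1.0 / mu + pm / (mu * (1.0 - rho)) * (ca2 + cb2) / (2 * m)

def servers_needed(target, lam, mu, ca2, cb2, m_max=100000):
    """Smallest m whose predicted mean response time stays below the target."""
    for m in range(1, m_max + 1):
        if mean_response_time(m, lam, mu, ca2, cb2) <= target:
            return m
    raise ValueError("target not reachable with m_max servers")

# Example: 6 ms target, 50,000 req/s arriving at servers that each serve 500 req/s.
print(servers_needed(target=0.006, lam=50000.0, mu=500.0, ca2=1.0, cb2=1.0))
```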
In this section, we propose three schemes to determine the number of servers that should be initially active in order to provide the desired response time guarantee.

AverageLoading: Based on a given workload trace (or an estimate of future workload), we compute the average workload of the whole trace. The number of initially active servers is determined by using this average workload as W_total in the optimization of SmartPlace+S. In this way, we can concentrate the load on a subset of servers while meeting the response time requirement.

50% Loading: Due to the fact that data centers operate under 30%-70% loadings most of the time, we can use the 50% workload as W_total in the optimization to determine the initial number of active servers. When the workload increases beyond 50%, more servers should be powered on to provide more computing resources, such that the response time requirement can be met.

Overprovisioning: This scheme over-provisions a certain number of active servers to handle the maximum possible increase in workload between two adjacent time intervals. We use the first data point in the trace file as W_total in the optimization to determine the number of initially active servers. The number of servers over-provisioned can be chosen based on the worst-case increase in the workload between two intervals according to history information. In this paper, we over-provision 20 active servers, and the time interval is 15 minutes for the Wikipedia [40] and IBM [41] traces.
Figure 5: Cooling power comparison among SmartPlace and two baselines (Hottest+Uniform, Hottest+Distribution) under different computation workloads with different numbers of liquid-cooled server blocks. (a) Cooling power with a 30% workload. (b) Cooling power with a 50% workload. (c) Cooling power with a 70% workload.
4.4. Optimization of Total Cost

Introducing liquid cooling to a data center incurs additional cost, as we have analyzed in Section 3. The total cost of running a data center, including the capital cost (CapEX) and the operational cost (OpEX), is a big concern to data center owners. In this section, we formulate a cost minimization problem with hybrid cooling. During the cooling system configuration process, CapEX depends on the expense of purchasing liquid-cooled servers, while OpEX depends on the electricity cost of operating the data center. Although purchasing liquid-cooled servers increases the CapEX, it also increases the cooling efficiency, leading to a decreased OpEX. CapEX is monotonically increasing with the number of liquid-cooled servers, while OpEX, on the other hand, is monotonically decreasing. Consequently, the sum of these two costs can have a global minimum. Our total cost minimization problem aims to find this global minimum as:

min{Cost_total = CapEX + OpEX}

1. CapEX, the cost of purchasing liquid-cooled servers and installing the liquid cooling system: CapEX = Cost_LiquidCooledServers × M + Cost_install, where M represents the number of liquid-cooled servers we buy. Cost_install includes the installation of the new liquid cooling system, valves, piping, and building leak detection.

2. OpEX, the electricity cost: OpEX = Cost_electricity × P_total × t + Cost_maintenance, where Cost_electricity represents the price of electricity per kWh, P_total is the total power consumption as explained for SmartPlace+S, t is the lifetime of a liquid-cooled server, which is assumed to be 5 years, and Cost_maintenance is the maintenance cost.

The constraints are the same as those in Section 4.2.
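A small sketch of this cost model is shown below (ours; the per-server price premium, electricity price, maintenance cost, and toy power model are placeholders, while the installation figure is of the order reported later in the paper). Sweeping M and keeping the minimum of CapEX + OpEX over the five-year lifetime yields the cost-optimal number of liquid-cooled servers.

```python
import math

# Illustrative sweep over the number of liquid-cooled servers M to find the minimum of
# CapEX + OpEX (Section 4.4). All prices and the toy power model are placeholders.

HOURS_5_YEARS = 5 * 365 * 24

def total_cost(m, premium_per_server, install_cost, elec_price_kwh,
               avg_power_kw, maintenance_cost):
    capex = premium_per_server * m + install_cost
    opex = elec_price_kwh * avg_power_kw(m) * HOURS_5_YEARS + maintenance_cost
    return capex + opex

def best_m(m_max, step=10, **kwargs):
    costs = {m: total_cost(m, **kwargs) for m in range(0, m_max + 1, step)}
    return min(costs, key=costs.get)

# Toy power model: liquid-cooled servers cut average facility power with diminishing returns.
toy_power = lambda m: 300.0 - 90.0 * (1.0 - math.exp(-m / 300.0))

print(best_m(1280, premium_per_server=400.0, install_cost=88920.0,
             elec_price_kwh=0.07, avg_power_kw=toy_power, maintenance_cost=50000.0))
```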
4.5. Coordinating with Free Air Cooling

As discussed before, hybrid cooling with the emerging free cooling technique has been increasingly adopted in today's data centers. Therefore, after the configuration of liquid-cooled servers, we propose another power optimization scheme to minimize server and cooling power through integrating free air cooling and intelligently managing the hybrid cooling system. We call this scheme SmartPlace+Free.

When the free air cooling method is chosen for the hybrid-cooled data center, the air cooling power is calculated in a different way, according to [19]:

P_free^air = (PUE_free − 1) × P_air^server    (12)

In our experiments, the free cooling PUE is modeled to be proportional to the ambient air temperature. This is because when the outside air temperature is relatively high, more air is needed to take away the heat generated by the servers, and thus the fan speed of the air handling unit needs to be higher to draw more air. In this paper, we assume that only one of the two air cooling systems (the traditional CRAC air cooling system and the free air cooling system) can run at a time in a hybrid-cooled data center.
Thus the total air cooling power consumption can be expressed as:

P^air = γ P_CRAC^air + (1 − γ) P_free^air    (13)

where γ is a binary variable indicating which air cooling system is activated. We now formulate the power minimization problem of the hybrid-cooled data center. N servers are deployed in the data center and M of them are liquid-cooled. Assuming that the total workload is W_total, we minimize the total power consumption as:

min{P^server + P^air + P^liquid}    (14)

subject to:

Σ_{i=1}^{N} W_i = W_total    (15)

T_i^mp < T_th^mp,  1 ≤ i ≤ M    (16)

T_i^in < T_th^in,  M + 1 ≤ i ≤ N    (17)

Equation 15 guarantees that all the workload W_total is handled by the servers. Equation 16 enforces that the microprocessor temperatures of the M liquid-cooled servers are below the required threshold T_th^mp. Equation 17 enforces that the inlet temperatures of the (N − M) air-cooled servers are below the required threshold T_th^in. It is important to note that our scheme performs offline optimization to determine the workload distribution, server on/off states, and the cooling mode of the data center at different outside temperatures. To dynamically determine those configurations, our scheme can conduct the optimization for different loading levels in an offline fashion and then apply the results online based on the current loading and the current outside temperature.
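To illustrate how the cooling mode γ in Equation 13 could be chosen at runtime, the sketch below (ours, not the authors' controller) compares the CRAC and free-cooling air power for the current outside temperature; the linear PUE_free model, the COP curve, and the outside-temperature limit are placeholder assumptions.

```python
# Illustrative cooling-mode selection for Equation 13. The PUE_free model and the
# CRAC COP curve below are placeholder assumptions, not the paper's calibrated models.

def pue_free(t_outside):
    """Free-cooling PUE assumed proportional to the ambient temperature."""
    return 1.05 + 0.004 * max(t_outside, 0.0)

def p_air_free(p_server_air, t_outside):
    return (pue_free(t_outside) - 1.0) * p_server_air          # Equation 12

def p_air_crac(p_server_air, t_cracsup):
    cop = 0.0068 * t_cracsup**2 + 0.0008 * t_cracsup + 0.458   # placeholder COP curve
    return p_server_air / cop

def choose_cooling_mode(p_server_air, t_outside, t_cracsup, t_outside_limit=25.0):
    """Return gamma (1 = CRAC, 0 = free cooling) and the resulting air-cooling power."""
    free = p_air_free(p_server_air, t_outside)
    crac = p_air_crac(p_server_air, t_cracsup)
    if t_outside > t_outside_limit or crac < free:   # free cooling infeasible or worse
        return 1, crac
    return 0, free

print(choose_cooling_mode(p_server_air=150.0, t_outside=12.0, t_cracsup=18.0))
```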
5. Simulation Results

In this section, we introduce our evaluation methodology and present the evaluation results from simulation.

5.1. Evaluation Methodology

In our work, we use a data center model that employs the standard configuration of alternating hot and cold aisles, which is consistent with those used in previous studies [23][38][11]. The data center consists of four rows of servers. Each row has eight racks, where each rack has 40 servers, adding up to 1,280 servers in total. Each server consumes 100 W when idle and 300 W when fully utilized. The volumetric flow rate of the intake air of each server is 0.0068 m³/s. Each of the four CRAC units in the data center pushes chilled air into a raised floor plenum at a rate of 9,000 ft³/min. Hot air from the servers is absorbed by the CRACs (no ceiling tiles). To simulate the thermal environment of the data center, a computational fluid dynamics (CFD) software package, Fluent [42], is used. Figure 3 shows both the data center layout and a thermal environment example when all the servers are air-cooled. We have also extended the data center model to include 10,240 servers for evaluations and cost analysis at a larger scale.

To solve the optimization problems in Section 4, LINGO [43], a comprehensive optimization tool, is used. LINGO employs branch-and-cut methods to break a non-linear programming model down into a list of sub-problems to improve the computation efficiency. It is important to note that our scheme performs offline configuration optimization, i.e., determining the optimal placement of liquid-cooled servers. To dynamically determine the workload distribution and server on/off status at different workloads, we can run our scheme for different loading levels in an offline fashion and then apply the results online based on the current loading (discussed in detail in Section 5.5), as shown in Figure 4. Therefore, the time complexity is not a critical concern. To further reduce the problem complexity, we follow the approach in [10] to group 10 adjacent servers together as a block. We solve the optimization problems for blocks instead of servers, and determine whether to replace all the servers in an entire block with liquid-cooled ones. For each block, we consider its inlet/outlet temperatures to make sure all the servers in that block are in the safe temperature range.
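As an illustration of this block-level simplification, the sketch below (ours) aggregates per-server recirculation coefficients into 10-server blocks; the aggregation rule is our assumption of one reasonable way to coarsen the matrix, not the paper's exact procedure.

```python
import numpy as np

# Illustrative aggregation of per-server recirculation coefficients into 10-server
# blocks (our assumed aggregation rule, not the paper's exact procedure).

def group_into_blocks(h, block_size=10):
    """h is the n-by-n server-level recirculation matrix; return the coarser
    block-level matrix by summing the heat exchanged between blocks and averaging
    over the servers of the source block."""
    n = h.shape[0]
    assert n % block_size == 0
    b = n // block_size
    return h.reshape(b, block_size, b, block_size).sum(axis=(1, 3)) / block_size

h_servers = np.random.default_rng(1).uniform(0.0, 0.01, size=(1280, 1280))
h_blocks = group_into_blocks(h_servers)
print(h_blocks.shape)   # (128, 128)
```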
5.2. Comparison with Traditional Air Cooling Schemes

We compare SmartPlace and SmartPlace+S with traditional workload distribution schemes in an air-cooled data center to show the power savings from intelligently deploying liquid-cooled servers. Figure 7 shows the evaluation results on total power consumption with different data center workloads under different power management schemes. The three schemes widely used in existing air-cooled data centers are Air-Spatial Subsetting, Air-Optimal Workload, and Air-Uniform Workload. Air-Spatial Subsetting concentrates all workload on as few servers as possible and turns off the other servers to save idle power. Air-Optimal Workload keeps all the blocks in the active state and distributes workload according to the temperature distribution. With Air-Uniform Workload, workload is uniformly distributed to each block. Moreover, for the liquid cooling schemes, 20 liquid-cooled blocks are configured in the data center.

Figure 7 shows that Air-Spatial Subsetting performs 31% better than Air-Optimal Workload and 37% better than Air-Uniform Workload when the workload is 30%. This is because when the workload is low, hot spots do not occur often, as the potential highest temperature in the data center is low. Thus, using Air-Spatial Subsetting in this condition can save more power. On the other hand, when the workload is high, hot spots occur often due to the high potential highest temperature, so Air-Spatial Subsetting consumes more cooling power. We can also see that SmartPlace performs better than Air-Uniform Workload and Air-Optimal Workload; the improved performance here comes from introducing liquid-cooled servers to the data center, leading to a higher cooling efficiency. Also, we can see that SmartPlace+S performs best under the three different workloads.
5.3. Effectiveness of Server Placement

In this section, we first compare the cooling power consumption of SmartPlace with two baselines: Hottest+Uniform
and Hottest+Distribution. We then evaluate their total (cooling and server) power consumption. In Hottest+Uniform, we emulate the common practice in production data centers to select the "hottest" servers with the highest outlet temperatures and replace them with liquid-cooled servers. We then uniformly distribute workload to each server. In Hottest+Distribution, we also select the hottest servers for placement but distribute the workload by considering the heat recirculation impacts, as SmartPlace does. We compare SmartPlace with Hottest+Distribution to highlight the advantage of intelligent server placement itself, as they use the same optimization for workload distribution. In this evaluation, we do not turn off servers and focus on cooling power.

Figure 5a shows the cooling power consumption when different numbers of server blocks are replaced with liquid-cooled servers. The entire data center operates with a 30% workload. We see that Hottest+Uniform consumes the most cooling power. Hottest+Distribution consumes up to 30% less cooling power than Hottest+Uniform. This is because Hottest+Distribution distributes workload with the optimization scheme that considers recirculation. Compared with Hottest+Uniform, SmartPlace consumes up to 40% less cooling power, because it considers liquid-cooled server placement and workload distribution together. For instance, when the workload is 30%, SmartPlace concentrates all the workload on servers in the middle of each row and at the bottom of the racks, because servers located in those places have less recirculation impact from other servers and thus lower outlet temperatures. Since running workload would increase the outlet temperatures of those servers, SmartPlace replaces them with liquid-cooled servers. In contrast, Hottest+Distribution simply replaces the hottest servers with the highest outlet temperatures with liquid-cooled servers and then distributes workload on those servers. Since those hottest servers are impacted significantly by other servers in the data center due to undesired air recirculation, Hottest+Distribution ends up with a lower COP and more cooling power consumption.

Figures 5b and 5c show the cooling power consumption with a 50% and 70% workload, respectively. The results show the same trend as in Figure 5a. Note that cooling power is reduced more with SmartPlace when the data center workload is lower, because we can consolidate the workload on the servers with better temperature conditions (less air recirculation) and leave other servers running in the idle state. We can also see that compared with Hottest+Uniform, SmartPlace saves more cooling power with fewer liquid-cooled servers. This is because when the number of liquid-cooled servers is relatively low, the placement of liquid-cooled servers contributes more to the cooling efficiency of the whole data center. These experiments demonstrate the importance of placing servers with different thermal profiles in the right locations.

We now compare the total power consumption of four different schemes: Hottest+Distribution, SmartPlace, SmartPlace+S (Section 4.2), and a new baseline, Hottest+Liquid-First. In Hottest+Liquid-First, we replace the hottest servers with liquid-cooled ones and then concentrate the workload first on the liquid-cooled servers. If the liquid-cooled servers cannot handle all the workload, we put the remaining workload on the "coolest" air-cooled servers that have the least air recirculation impacts. Figure 6a shows the total power consumption of the four schemes with different numbers of liquid-cooled server blocks and a 30% workload. We see that Hottest+Distribution consumes the most power. SmartPlace performs better than Hottest+Distribution, as explained before. Hottest+Liquid-First performs better than both Hottest+Distribution and SmartPlace, because Hottest+Liquid-First dynamically turns off servers when the workload is low, while the other two keep all the servers on, resulting in higher server power. However, Hottest+Liquid-First simply distributes workload without considering the air recirculation, resulting in much higher cooling power when the workload increases to 70% in Figure 6c. As a result, Hottest+Liquid-First leads to the highest total power consumption when the workload is 70%. SmartPlace+S performs the best under different workloads because it conducts optimization to leverage all the available knobs: server placement, workload distribution, and server on/off, to minimize the total power consumption of the data center with the consideration of air recirculation.

Figure 6: Total power comparison among Hottest+Liquid-First, Hottest+Distribution, SmartPlace, and SmartPlace+S under different computation workloads with different numbers of liquid-cooled server blocks. (a) Total power with a 30% workload. (b) Total power with a 50% workload. (c) Total power with a 70% workload.

Figure 7: Comparison with traditional workload distribution schemes.
5.4. Comparison with PowerTrade

In this section, we compare the total power consumption and electricity cost of SmartPlace+S with PowerTrade, a state-of-the-art cooling management scheme [10] that minimizes both the cooling and idle power of a data center. The key difference
Page 10 of 15
10
Table 2: Total power reduction comparison with PowerTrade-d under different threshold temperature at various computation loadings.
23◦ C 14% 15% 26.7%
Workload 30% 50% 70%
25◦ C 12.8% 14.4% 18%
27◦ C 11.3% 11.5% 12.9%
29◦ C 9.5% 10.5% 11.5%
ip t
Table 3: Estimated electricity cost per year of different schemes with various computation workloads.
30% $189,560 $183,960
50% $315,184 $295,562
70% $488,107 $414,523
cr
Workload PowerTrade-d SmartPlace+S
an
us
performs better under lower threshold temperature and at higher loadings. Now we discuss the electricity cost of PowerTrade-d and SmartPlace+S. We use the average industrial price obtained from DOE [44] in our analysis. We can see the total electricity cost for one year from Table 3. Compared with PowerTrade-d, SmartPlace+S leads to lower electricity costs, because it consumes less cooling power. We can also see that the saving increases with the increase of data center workload. 5.5. Power Minimization with Response Time Guarantee In this section, we compare the power consumption of different schemes while maintaining the response time. We use two trace files: Wikipedia [40] and IBM [41], for evaluation. Each trace records the average CPU utilization every 15 minutes. The Wikipedia trace contains data for 26 days. The IBM trace records the data for one week. As discussed in Section 4.3, we propose three schemes to maintain response time: AverageLoading, 50% Loading and OverProvisioning. Figure 9 shows the average response time of the evaluated Wikipedia trace. We can see that OverProvisioning performs the best in meeting the requirement of response time while the other two schemes have greater degradation. Thus we use OverProvisioning in the following experiments. In our proposed scheme SmartPlace+S, the workload of the data center is assumed to be a constant. Therefore, in order to simulate a real workload trace where the workload varies over time, we should first decide the number and locations of liquid-cooled server blocks before the run time. By optimizing the total cost during the lifetime of liquid-cooled servers 500
Power (kW)
pt
ed
M
is that PowerTrade assumes that servers are homogeneous and thus does not differentiate servers with different thermal profiles (e.g., liquid-cooled servers). PowerTrade includes two schemes, PowerTrade-s (static) and PowerTrade-d (dynamic). PowerTrade-s first divides the data center into cool, warm and hot zones according to the air flow pattern. It then determines the load distribution across the zones by using a simple calibration run. For within-zone load distribution, PowerTrade-s employs Spatial Subsetting to consolidate workload onto as few servers as possible and then turns off other servers. PowerTrade-d uses an online approach to dynamically adjust the workload distribution according to the server temperatures. Figure 8 shows the total power consumption of the data center when different power management schemes are used for distributing workload and turning off unused servers for power savings. We evaluate the power consumption under different workloads, with 20 liquid-cooled server blocks. The total power is broken down into three parts: idle power, cooling power, and compute power. The result shows that PowerTrade-s consumes the most power because it does not consider the heat exchange caused by dynamic air flow across different zones, leading to both higher cooling and idle power. SmartPlace+S performs the best and consumes up to 18% less total power than PowerTrade-d. With a 30% workload, the idle power of PowerTrade-d and that of SmartPlace+S are almost the same. The difference between these two schemes is on cooling power. With the 50% and 70% workloads, there exist small differences on idle power between the two schemes, while the cooling power difference becomes prominent. This is because SmartPlace+S improves the cooling efficiency of the data center by placing liquid-cooled servers in the right locations, while Power Trade-d simply places them in the hottest locations and distributes workload without considering their different thermal profiles. We have discussed in the previous sections that the high power consumption of the CRAC systems in today’s data centers is mainly due to their unnecessarily low temperature set points. Now we discuss the experiments under different threshold temperature and compare the total power consumption between air cooling and liquid cooling schemes. In Table 2, we present the relative power saving achieved by SmartPlace+S and PowerTrade-d under different threshold temperatures at various loadings. In SmartPlace+S, we configure 20 liquid blocks in the data center. It shows that when data center is operating with a workload between 30% to 70% of its maximum capacity, SmartPlace+S consumes 9.5%-26.7% less power than PowerTrade-d with different threshold temperatures. The saving ratio increases as the threshold temperature becomes lower. For instance, at 30% loading, the total power saving from SmartPlace+S over PowerTrade-d is 9.5% when the temperature set point is 29◦ C. The saving ratio increases to 14% when threshold temperature becomes 23◦ C. Another fact we see is that SmartPlace+S saves more power at higher loadings. For instance when threshold temperature is 23◦ C, the total power saving over PowerTrade-d is 14% at 30% loading while it increases to 26.7% at 70% loading. This result shows that SmartPlace+S
Figure 8: Detailed power consumption breakdown for different schemes at 30%, 50%, and 70% loading (x-axis: a is PowerTrade-s, b is PowerTrade-d, and c is SmartPlace+S); bars are broken down into idle, compute, and cooling power.
Figure 10: Trace-driven simulation shows that SmartPlace+S results in lower power consumption than PowerTrade-d [10]. (a) Total power consumption in response to the time-varying workload in the Wikipedia trace. (b) Total power consumption in response to the time-varying workload in the IBM trace. (c) Total power consumption for a larger data center with 10,240 servers with the IBM trace.
Figure 9: Average response time comparison among the three response time maintenance schemes (OverProvisioning, AverageLoading, and 50% Loading); the requirement is 6 ms.
By optimizing the total cost over the lifetime of the liquid-cooled servers (i.e., five years) according to the Optimization of Total Cost proposed in Section 4.4, we obtain the number of server blocks that should be configured with liquid-cooled servers under the average workloads of the two traces: 44 blocks for the Wikipedia trace and 54 blocks for the IBM trace. We determine the locations of the liquid-cooled server blocks as follows. We first optimize the total power consumption offline for every workload level in the trace, from the lowest to the highest in increments of 0.1%, to obtain the optimal locations for each level. Then, according to the occurrence frequency of each workload level in the trace, we choose the most frequently selected locations for the liquid-cooled blocks. Finally, we run the two traces online to evaluate the power consumption. Figures 10a and 10b show the power consumption of SmartPlace+S and PowerTrade-d for the two traces; SmartPlace+S performs better because of the intelligent placement of liquid-cooled blocks. We also compare the two schemes using the IBM trace in a larger data center with 10,240 servers. Figure 10c shows the power consumption over one week when 475 blocks are replaced with liquid-cooled servers. As expected, SmartPlace+S results in less total power consumption, while both schemes guarantee the 95th percentile of the response time.
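A minimal sketch of this offline placement step is shown below. The paper's placement optimizer itself is not reproduced here; optimal_locations(level, num_blocks) is a hypothetical stand-in that returns the power-optimal block positions for a given workload level, and the block indices, the 0.1% step, and the frequency-weighted vote counting are illustrative only.

```python
from collections import Counter

def choose_block_locations(trace_utils, num_blocks, optimal_locations, step=0.001):
    """Pick liquid-cooled block locations offline from a utilization trace.

    trace_utils: iterable of data-center utilization samples in [0, 1]
                 (e.g., one value per 15-minute interval).
    num_blocks:  number of server blocks to convert to liquid cooling.
    optimal_locations: callable(level, num_blocks) -> iterable of block ids,
                 the power-optimal placement at that workload level.
    """
    # Quantize each sample to an integer workload level (0.1% increments).
    levels = [round(u / step) for u in trace_utils]
    freq = Counter(levels)                      # occurrence frequency of each level

    # Each level "votes" for its optimal locations, weighted by how often it occurs.
    votes = Counter()
    for level, count in freq.items():
        for block in optimal_locations(level * step, num_blocks):
            votes[block] += count

    # Keep the most frequently selected locations.
    return [block for block, _ in votes.most_common(num_blocks)]

if __name__ == "__main__":
    # Toy optimizer (placeholder, not the paper's model): prefers low-index
    # blocks at low load, shifting to higher-index blocks as load grows.
    def toy_optimizer(level, k):
        start = int(level * 10)
        return range(start, start + k)

    trace = [0.30, 0.31, 0.30, 0.55, 0.70, 0.30]
    print(choose_block_locations(trace, num_blocks=3, optimal_locations=toy_optimizer))
```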
5.6. Cost Analysis

As discussed in Section 5.5, in order to minimize the total cost over a five-year period (i.e., the lifetime of the servers), the optimal number of liquid-cooled servers to install is 540 (i.e., 54 blocks). We now analyze how soon the saved cooling cost exceeds the extra capital investment in those servers. The capital cost for our data center mainly consists of two parts. The first part is the cost of purchasing the cold plates and tubes for the 54 blocks, approximately $81,000, given that each cold plate with Press-Lock and Dia Tube costs around $150 [45]. The second part is the cost of furnishing and installing the new liquid cooling system, including valves, pumps, heat exchangers, piping, building monitoring, leak detection, and electrical connections; this part is estimated to be $88,920 [28].
Our analysis shows that SmartPlace+S saves $4,827.14 per month for a data center with 1,280 servers, compared with PowerTrade-d, which does not use liquid-cooled servers in this analysis. SmartPlace+S can therefore recover the investment in liquid cooling in 35.20 months (less than 3 years). Our analysis also shows that, in the data center with 10,240 servers, SmartPlace+S saves $56,526.56 in electricity cost per month compared with PowerTrade-d. To determine how many servers should be replaced with liquid-cooled ones, a data center operator needs to weigh four factors: the data center workload, the electricity price, the capital cost of the liquid-cooled servers, and the lifetime of the servers. For example, with a higher workload, a higher electricity price, or a longer server lifetime, more servers should be replaced with liquid-cooled ones, because liquid-cooled servers save more electricity cost in the long term for a data center with a higher workload.
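The payback figure can be checked directly from the numbers reported above; the simple-payback calculation below (capital cost divided by monthly savings, with no discounting) is our own simplification.

```python
# Simple payback estimate for the 1,280-server data center (no discounting).
cold_plate_cost = 54 * 10 * 150.0    # 54 blocks x 10 servers x ~$150 per cold plate [45]
installation_cost = 88_920.0         # valves, pumps, piping, monitoring, etc. [28]
capital_cost = cold_plate_cost + installation_cost

monthly_savings = 4_827.14           # SmartPlace+S vs. PowerTrade-d, per month

payback_months = capital_cost / monthly_savings
print(f"capital = ${capital_cost:,.0f}, payback = {payback_months:.2f} months")
# -> capital = $169,920, payback = 35.20 months (under 3 years)
```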
5.7. Coordination of Different Cooling Systems

After the deployment of liquid-cooled servers, we integrate free air cooling and intelligently manage the hybrid cooling system. In this section, we compare our cooling coordination scheme, SmartPlace+Free, with two baselines: Load-Unaware and Liquid-First. Load-Unaware determines the cooling mode by comparing the outside temperature to a fixed temperature threshold, which is equal to the highest CRAC supply temperature that can safely cool the servers when they are all fully utilized. When the outside temperature is below the threshold, free air cooling is used; otherwise, the traditional air cooling system with chillers and pumps is selected. Load-Unaware prefers to distribute the workload to the liquid-cooled servers; if they are fully utilized, the remaining workload is distributed to the air-cooled servers, giving priority to the servers in the middle of each row and at the bottom of each rack, because servers in those locations suffer less recirculation and have lower inlet temperatures. In contrast, Liquid-First dynamically adjusts the temperature threshold for free air cooling based on the real-time workload. It first distributes the workload to the servers in the same way as Load-Unaware, and then uses the highest CRAC supply temperature that can safely cool the servers as the temperature threshold.
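A minimal sketch of how the two baselines choose the cooling mode, under our reading of the description above; safe_supply_temp() is a hypothetical placeholder for the thermal model that returns the highest CRAC supply temperature able to safely cool the servers at a given utilization (its linear form and constants are illustrative assumptions).

```python
def safe_supply_temp(utilization):
    """Hypothetical thermal model: highest safe CRAC supply temperature (deg C)
    for a data-center utilization in [0, 1]. Placeholder linear model."""
    return 25.0 - 10.0 * utilization

def cooling_mode_load_unaware(outside_temp):
    # Fixed threshold: the safe supply temperature at 100% utilization.
    threshold = safe_supply_temp(1.0)
    return "free air" if outside_temp <= threshold else "chiller (CRAC)"

def cooling_mode_liquid_first(outside_temp, utilization):
    # Threshold recomputed from the real-time workload, so free air cooling
    # remains usable at higher outside temperatures when the load is light.
    threshold = safe_supply_temp(utilization)
    return "free air" if outside_temp <= threshold else "chiller (CRAC)"

if __name__ == "__main__":
    for t_out in (10.0, 18.0):
        print(t_out,
              cooling_mode_load_unaware(t_out),
              cooling_mode_liquid_first(t_out, utilization=0.3))
```

With these placeholder numbers, at 18 degC outside Load-Unaware has already fallen back to the chillers while Liquid-First still uses free air cooling at 30% load, which is the qualitative behavior discussed for Figure 11.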
Figure 11: Total power consumption with Load-Unaware, Liquid-First, and SmartPlace+Free at different loadings. (a) Total power at 30% loading. (b) Total power at 50% loading. (c) Total power at 70% loading.
Figure 12 panels: (a) 1 row of servers is liquid cooled; (b) 2 rows of servers are liquid cooled; (c) 3 rows of servers are liquid cooled.
Figure 11 shows the total power consumption of the three schemes at different loadings and outside temperatures. All three cooling schemes achieve low power consumption when the outside temperature is low, because all of them can use free air cooling. Compared with Load-Unaware and Liquid-First, SmartPlace+Free shows the lowest power consumption because it considers the heat recirculation among air-cooled servers when distributing the workload. As the outside temperature increases, Load-Unaware is the first of the three schemes to show a jump in its power consumption curve, because it uses a fixed temperature threshold to decide whether to use free air cooling. The threshold of Load-Unaware is determined with the data center running a 100% workload, and it is therefore unnecessarily low for lighter workloads such as 30%. Liquid-First is the second to show a power jump due to switching from free air cooling to CRAC cooling. It can use free air cooling at higher outside temperatures than Load-Unaware, because its temperature threshold is determined by the real-time workload (e.g., 30%, 50%, or 70%) rather than the maximum workload (100%); hence Liquid-First saves power compared with Load-Unaware. SmartPlace+Free is the last to show the power jump, because it optimizes the workload distribution between liquid-cooled and air-cooled servers, whereas the two baselines concentrate the workload on a small number of air-cooled servers and create hot spots once air cooling becomes necessary. Those hot spots require a lower supply air temperature and thus increase the cooling power. Therefore, SmartPlace+Free is the most power-efficient scheme.
Figure 12: Power consumption comparison when different numbers of servers are liquid cooled, with different schemes, under different outside temperatures.
5.8. Impact of the Number of Liquid-Cooled Servers

We also conduct experiments to show the impact of the number of liquid-cooled servers.
Figure 13: Our hardware testbed with four servers and a liquid-cooling cold plate to take away CPU heat. (a) Server rack. (b) Liquid-cooling cold plate.
Figure 12 shows the total power consumption when different numbers of servers are liquid cooled, at 50% loading. Increasing the number of liquid-cooled servers increases the free cooling utilization: for a given workload, free cooling can be used over a wider range of outside temperatures in a data center with more liquid-cooled servers. However, when free cooling is in use, a data center with more liquid-cooled servers consumes more cooling power, because the liquid-cooled servers still consume chiller and pump power to remove the heat generated by their CPUs.

6. Hardware Testbed Results

In this section, we present the experimental results from our testbed in a server room. The testbed consists of four rack servers, marked in Figure 13a, and one liquid-cooling cold plate, shown in Figure 13b, which can be installed on one of the four servers. We first explore the impact of the physical location of the liquid-cooled server on the temperature and recirculation conditions of the testbed. Before the experiment, we calculate the recirculation coefficient h_ij, which represents the percentage of heat recirculated from server i to server j, by measuring the inlet and outlet temperatures of the four servers while turning on one of the four servers at a time.
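One plausible way to back out these coefficients (treated here as fractions) from the single-server-on measurements is sketched below, assuming the inlet temperature rise at server j above the supplied air is caused entirely by heat recirculated from the one active server i. The per-server air mass flow and heat capacity values are placeholders, not the values measured on our testbed.

```python
def recirculation_coefficients(inlet_rise, server_power,
                               mass_flow=0.05, cp=1005.0):
    """Estimate h[i][j]: fraction of server i's heat reaching server j's inlet.

    inlet_rise[i][j]: inlet temperature rise (deg C) at server j above the
                      supply temperature, measured with only server i on.
    server_power[i]:  power (W) drawn by server i during that measurement.
    mass_flow, cp:    per-server air mass flow (kg/s) and air heat capacity
                      (J/(kg*K)); placeholder values.
    """
    n = len(server_power)
    h = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            recirculated_heat = mass_flow * cp * inlet_rise[i][j]   # watts reaching j
            h[i][j] = recirculated_heat / server_power[i]
    return h

if __name__ == "__main__":
    # Toy measurements for a 4-server testbed (values are illustrative only).
    rise = [[0.0, 0.4, 0.2, 0.1],
            [0.3, 0.0, 0.3, 0.1],
            [0.2, 0.3, 0.0, 0.4],
            [0.1, 0.1, 0.3, 0.0]]
    power = [200.0, 200.0, 200.0, 200.0]
    for row in recirculation_coefficients(rise, power):
        print([round(x, 3) for x in row])
```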
Figure 14 panels: (a) Server outlet temperatures under four different server placement schemes (SmartPlace vs. Random1, Random2, Random4). (b) Server outlet temperatures under the proposed optimization scheme and two baselines, with the 37°C temperature constraint. (c) Power consumption comparison of a: SmartPlace+S, b: Air-Spatial Subsetting, and c: Air-Uniform Workload (server, pump, estimated cooling, and estimated total power).
We then use SmartPlace to determine which server should be installed with the liquid cold plate to minimize the cooling power. Since we do not have a traditional CRAC in the server room, we minimize the outlet temperature, so that the potential cooling power is minimized. Figure 14a shows the outlet temperatures of the four servers when a selected server is installed with the cold plate. SmartPlace chooses server 3, while Random1, Random2, and Random4 represent the scenarios in which server 1, server 2, and server 4 are chosen for the cold plate, respectively. The four servers run the same workload in all the experiments. Because the air flow in the server room and the heat recirculated from other racks affect the four servers differently, their outlet temperatures differ even when all of them are air cooled with the same workload. Figure 14a also shows that the server temperatures differ depending on which server is equipped with liquid cooling, due to air recirculation. SmartPlace leads not only to the lowest peak outlet temperature but also to the lowest average outlet temperature of the four servers, because it installs the cold plate on server 3, the server that reduces the recirculation effect of the whole system the most. With a lower peak outlet temperature, the required cooling power is the minimum among the four placement schemes.
In the second experiment, we consider all three knobs, i.e., workload distribution, turning servers on/off, and liquid-cooled server replacement, to minimize the total power consumption of the system. Because we do not have a traditional CRAC in our server room, we use the ambient air temperature, 27°C, as the CRAC supply temperature. We set 37°C, 10°C above the supply temperature, as the temperature constraint, the same as in the simulation, and set the average workload of the whole system to 50%. Since the CRAC supply temperature is constant in our system, the COP is also constant. With CoolingPower = ServerPower/COP, we have TotalPower = ServerPower + ServerPower/COP, which means that the total power is determined by the server power as long as a scheme meets the temperature constraint. We therefore adapt SmartPlace+S to determine the workload distribution and the server on which to install the cold plate so as to minimize the server power. Our solution installs the cold plate on server 1 and concentrates the workload on server 1 and server 4, which suffer less air recirculation from the other servers.
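A simplified sketch of the selection step in the first experiment is given below; it is not the SmartPlace formulation itself. It assumes an abstract heat model in which installing the cold plate on server k removes the CPU share of that server's heat from the recirculated air, and it picks the k that minimizes the highest predicted outlet temperature. The CPU-heat fraction, airflow constants, and the toy recirculation matrix are illustrative assumptions, not our testbed's calibrated values.

```python
def predicted_outlet_temps(h, power, liquid_server, t_supply=27.0,
                           cpu_fraction=0.6, mass_flow=0.05, cp=1005.0):
    """Predict outlet temperatures when `liquid_server` carries the cold plate.

    h[i][j]: fraction of server i's heat recirculated to server j's inlet.
    power[i]: server i's power draw (W). The cold plate is assumed to remove
    the CPU share of that server's heat from the air path (simplification).
    """
    n = len(power)
    heat_to_air = [p * (1.0 - cpu_fraction) if i == liquid_server else p
                   for i, p in enumerate(power)]
    outlets = []
    for j in range(n):
        # Inlet = supply temperature plus recirculated heat from every server.
        t_in = t_supply + sum(h[i][j] * heat_to_air[i] for i in range(n)) / (mass_flow * cp)
        # Outlet = inlet plus the server's own heat rejected to the air stream.
        outlets.append(t_in + heat_to_air[j] / (mass_flow * cp))
    return outlets

def choose_liquid_server(h, power):
    """Return the server whose liquid cooling minimizes the hottest outlet."""
    return min(range(len(power)),
               key=lambda k: max(predicted_outlet_temps(h, power, k)))

if __name__ == "__main__":
    # Toy recirculation matrix in which server 3 (index 2) recirculates the most heat.
    h = [[0.00, 0.02, 0.01, 0.01],
         [0.02, 0.00, 0.02, 0.01],
         [0.06, 0.08, 0.00, 0.07],
         [0.01, 0.01, 0.02, 0.00]]
    power = [200.0, 200.0, 200.0, 200.0]
    print("install cold plate on server", choose_liquid_server(h, power) + 1)
```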
We now compare SmartPlace+S with air-cooling-only baselines under the given constraint of 37°C on the outlet temperature. We first tried to use our optimization scheme to determine the workload distribution and server on/off states for the baselines, so that the advantage of optimized placement could be highlighted; however, no solution that meets the temperature constraint can be found for the air-cooling baselines. Therefore, we evaluate two ad hoc workload distribution schemes for air cooling. In the first scheme, Air-Spatial Subsetting, we concentrate all the workload on server 1 and server 4, the same servers selected by SmartPlace+S, and turn off the other two servers. The second air-cooling scheme, Air-Uniform Workload, distributes the workload uniformly across the four servers. Figure 14b shows the outlet temperatures of the four servers under the three workload distribution and cooling schemes. SmartPlace+S has the lowest average outlet temperature with no constraint violation. In contrast, the two air-cooling strategies cannot meet the outlet temperature constraint on some servers, which means that additional cooling power would be needed to bring those servers within the constraint. To evaluate the power consumption of the three schemes, we show their power breakdown in Figure 14c. We estimate the cooling power according to the relation CoolingPower = ServerPower/COP. The estimated cooling power required by the two air-cooling schemes is higher than that of SmartPlace+S. Air-Uniform Workload has higher server power because its uniform workload distribution keeps all four servers on, while the other two schemes turn on only two servers. Moreover, the server power of SmartPlace+S is slightly higher than that of Air-Spatial Subsetting: although the two schemes use the same workload distribution (on server 1 and server 4), the water pump of the cold plate shown in Figure 13b consumes a small amount of additional power. Nevertheless, the total power consumption of SmartPlace+S is 9.18% less than Air-Spatial Subsetting and 44.4% less than Air-Uniform Workload.
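The cooling-power estimate used for Figure 14c follows directly from this relation; a minimal sketch with placeholder numbers (the COP value and the server power are illustrative, not the ones used in our estimation):

```python
def estimated_powers(server_power_w, cop):
    """Cooling and total power when a scheme meets the temperature constraint:
    cooling = server / COP, so total = server * (1 + 1/COP)."""
    cooling = server_power_w / cop
    return cooling, server_power_w + cooling

if __name__ == "__main__":
    # Placeholder values: 300 W of measured server power and a COP of 3.5.
    cooling, total = estimated_powers(300.0, cop=3.5)
    print(f"estimated cooling = {cooling:.1f} W, estimated total = {total:.1f} W")
```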
Figure 14: Hardware testbed results show that the proposed SmartPlace and SmartPlace+S schemes outperform the baselines.
7. Conclusion

We have presented SmartPlace, an intelligent server placement algorithm that deploys liquid-cooled servers with optimized workload distribution to minimize data center cooling power. SmartPlace also puts selected servers into sleep to jointly minimize the cooling and server power of the entire data center. As today's data centers have started to adopt emerging cooling techniques, i.e., liquid cooling and free air cooling, we have extended SmartPlace (as SmartPlace+Free) to intelligently coordinate them with traditional CRAC air cooling and dynamic workload distribution for jointly minimized cooling and server power. We compare SmartPlace with a state-of-the-art cooling optimization solution for two data centers with 1,280 and 10,240 servers, respectively. The results show that SmartPlace achieves up to 26.7% (15% on average) less total power consumption with dynamically guaranteed application response time. Our hardware testbed results also demonstrate the effectiveness of SmartPlace. Finally, we have shown that SmartPlace can help a data center gain a full return on the capital investment of liquid cooling in less than 3 years.

References
[1] L. Li et al., “Placement optimization of liquid-cooled servers for power minimization in data centers,” in IGCC, 2014. [2] L. Stapleton and others , “Getting smart about data center cooling ,” http://www.hpl.hp.com/news/2006/oct-dec/power.html. [3] J. Koomey et al., “Estimating Total Power Consumption by Servers in the U.S. and the World,” http://sites.amd.com/de/ Documents/svrpwrusecompletefinal.pdf, 2007. [4] D. Chemicoff and others, “The Uptime Institute 2012 Data Center Survey,” www.zdnet.com/blog/datacenter/. [5] Google.DataCenters, “From paper mill to data center,” 2013. [Online]. Available: www.google.com/about/datacenters/locations/hamina [6] M. Iyengar et al., “Server liquid cooling with chiller-less data center design to enable significant energy savings,” in SEMI-THERM, 2012. [7] CERN Accelerating science, “Data Centre,” http://informationtechnology.web.cern.ch/about/computer-centre. [8] N. El-Sayed et al., “Temperature management in data centers: Why some (might) like it hot,” in SIGMETRICS, 2012. [9] CALIFORNIA ENERGY COMMISION, “Summertime energy-saving tips for businesses .” [10] F. Ahmad et al., “Joint optimization of idle and cooling power in data centers while maintaining response time,” in ASPLOS, 2010. [11] J. Moore et al., “Making scheduling cool: Temperature-aware workload placement in data centers,” in USENIX, 2005. [12] S. Li et al., “Joint optimization of computing and cooling energy: Analytic model and a machine room case study,” in ICDCS, 2012. [13] D. Meisner et al., “Powernap:eliminating server idle power,” in ASPLOS, 2009. [14] R. Sullivan et al., “Alternating cold and hot aisles provides more reliable cooling for server farms.” In uptime Institute, 2000. [15] M. Iyengar et al., “Energy efficient economizer based data centers with air cooled servers,” in ITherm, 2012. [16] A. Coskun et al., “Energy-efficient variable-flow liquid cooling in 3d stacked architectures,” in DATE, 2010. [17] A. Sridhar et al., “System-level thermal-aware design of 3d multiprocessors with inter-tier liquid cooling,” in THERMINIC, 2011. [18] B. Rubenstein et al., “Hybrid cooled data center using above ambient liquid cooling,” in ITherm, 2010. [19] D. Sujatha et al., “Energy efficient free cooling system for data centers.” in CloudCom, 2011. [20] B. Gebrehiwot et al., “Cfd analysis of free cooling of modular data centers.” in SEMI-THERM, 2011.
[21] W. Zheng et al., “Exploiting thermal energy storage to reduce data center capital and operating expenses,” in HPCA, 2014. [22] Q. Tang et al., “Thermal-aware task scheduling for data centers through minimizing heat recirculation,” in Cluster Computing, 2007. [23] R. Sharma et al., “Balance of power: dynamic thermal management for internet data centers,” Internet Computing, IEEE, vol. 9, no. 1, 2005. [24] E. Pinheiro et al., “Load balancing and unbalancing for power and performance in cluster-based systems,” Technical Report DCS-TR440,Department of Computer Science,Rutgers University, 2001. [25] Y. Chen et al., “Integrated management of application performance, power and cooling in data centers,” in NOMS, 2010. [26] W. Zheng et al., “Data center sprinting: Enabling computational sprinting at the data center level,” in ICDCS, 2015. [27] L. Li et al., “Coordinating liquid and free air cooling with workload allocation for data center power minimization,” in ICAC, 2014. [28] J. Grimshaw et al., “Data center rack level cooling utilizing water-cooled, passive rear door heat exchangers (rdhx) as a cost effective alternative to crah air cooling,” MAKING DATA CENTERS SUSTAINABLE, 2011. [29] O. Allen et al., “Black box network service,” Retrofitting with passive water cooling at the rack level, 2011. [30] M. Richard et al., “Technology/clean technology,” 2011. [Online]. Available: www.treehugger.com [31] D. Hwang et al., “Energy savings achievable through liquid cooling: A rack level case study,” in ITherm, 2010. [32] G. Gebrehiwot et al., “Cfd analysis of free cooling of modular data centers,” in Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), 2012 28th Annual IEEE, 2012. [33] D. Sujatha et al., “Energy efficient free cooling system for data centers,” in Cloud Computing Technology and Science (CloudCom), 2011 IEEE Third International Conference on, 2011. [34] G. Lyon and others, “Advanced Liquid Cooling,” Processor.com. [35] W. Huang et al., “Tapo:thermal-aware power optimization techniques for servers and data centers,” in IGCC, 2011. [36] Q. Tang et al., “Sensor-based fast thermal evaluation model for energy efficient high-performance datacenters,” in ICISIP, 2006. [37] J. Moore et al., “Making scheduling cool: Temperature-aware workload placement in data centers,” in USENIX, 2005. [38] ——, “Weatherman:automated,online and predictive thermal mapping and management for data centers,” in ICAC, 2006. [39] O. Allen et al., Probability,statistics and queuing theory with computer science applications. Academic Press, 1990. [40] G. Urdaneta et al., “Wikipedia workload analysis for decentralized hosting,” Elsevier Computer Networks, vol. 53, no. 11, 2009. [41] X. Wang et al., “SHIP: Scalable hierarchical power control for large-scale data centers,” in PACT, 2009. [42] Fluent, “Computational fluid dynamics (CFD) software by Ansys Inc.” http://www.caeai.com/cfd-software.php. [43] C. Gau et al., “Implementation and testing of a branch and bound based method for deterministic global optimization: Operations research applications,” Kluwer Academic Publishers, vol. 15, no. 5, 2003. [44] ENERGY.GOV, “Energy efficiency renewable energy,” 2012. [Online]. Available: http://www.eere.energy.gov/ [45] THERMACORE, “Cold plate liquid cooling solutions: High efficiency for high power,” 2013. [Online]. Available: www.thermacore.com