Development of an energy evaluation and design tool for dedicated cooling systems of data centers: Sensing data center cooling energy efficiency

Energy and Buildings 96 (2015) 357–372

Jinkyun Cho a,*, Joonyoung Yang b, Changkeun Lee b, Jinyoung Lee c

a Construction Technology Division, Samsung C&T Corporation, Seoul 135-935, South Korea
b M&E Engineering Center, Samsung C&T Corporation, Seoul 135-935, South Korea
c R&D Institute, Hanil Mechanical Electrical Consultants (HIMEC) Ltd., Seoul 150-103, South Korea

Article info

Article history: Received 5 June 2014; Received in revised form 9 March 2015; Accepted 18 March 2015; Available online 26 March 2015

Keywords: Data center; Dedicated cooling system; Energy efficiency; Energy evaluation tool; Energy simulation; Power usage effectiveness (PUE)

Abstract

Data centers are approximately 50 times more energy-intensive than conventional office buildings; ICT equipment consumes about 50% of their total electricity, and cooling accounts for roughly 35% or more of total energy use. The main objective of this study is to define an energy analysis process, supported by numerical and simulation studies, to assess the influence of each technical component in order to create energy-optimized data centers. This includes dedicated cooling systems and design conditions that were previously not generally used, how they affect energy efficiency, and how the prioritization of system selection is derived. Energy simulation programs have become more sophisticated through constant maintenance over many years; however, those who have attempted to model data center design loads and annual energy consumption have found that the suite of commercial programs available for occupied office spaces is not easily adaptable to data center projects. Data centers have very simple load-contributing components but highly complex load-growth and system-growth modeling challenges. This paper addresses modeling needs that are unique to data centers, using both hourly energy simulation programs and simpler approaches. The developed data center energy evaluation methodology and program can be used by engineers and designers to assess the effectiveness and economic benefits of cooling systems. © 2015 Elsevier B.V. All rights reserved.

1. Introduction

Due to rising global energy costs, electrical demand, environmental problems, and other economic pressures, low-energy data centers have become a major trend in the information and communication technology (ICT) industry. The conventional data center approach, which prioritized stability above all else, is no longer sufficient. Data centers are energy-intensive facilities that consume approximately 1.5–3.0% of the electricity produced in industrialized countries [1,2]. As the power density of ICT equipment has increased significantly, the energy consumed by cooling systems has also increased, to about 35% of total data center energy use [3]. In many current data centers the IT equipment itself uses only half of the total energy [4], while most of the remainder is required for cooling and air management, resulting in poor power usage effectiveness (PUE) values. Furthermore, conventional

∗ Corresponding author. Tel.: +82 2 2145 6999; fax: +82 2 2145 7660. E-mail address: [email protected] (J. Cho). http://dx.doi.org/10.1016/j.enbuild.2015.03.040 0378-7788/© 2015 Elsevier B.V. All rights reserved.

cooling methods for removing heat from the servers are considered insufficient for stable operation of ICT equipment. Large energy needs and significant CO2 emissions mean that issues related to cooling, heat transfer, and ICT infrastructure location are studied ever more carefully during the planning and operation of data centers. When a data center is planned for new construction or remodeling, clients increasingly demand a low-energy data center because of the large energy costs incurred during operation. It is therefore important to propose energy-optimized data centers corresponding to the local climate early in design, based on the limited information available at that stage. Various commercial energy simulation programs have been used for this purpose. However, existing energy analysis programs are optimized for residential and general buildings and are not easily adaptable to energy analysis of data centers, where the energy consumption pattern is completely different. Design and evaluation methodologies that increase the efficiency of air management, cooling plant, and low-energy systems in high-density data centers are therefore indispensable. In order to optimize the design or configuration of a data center, appropriate methodologies and tools are needed to evaluate how much computation or data processing


Nomenclature

P	power (energy) consumption (kW h)
Q	thermal demand (kW h)
L	power (energy) losses (kW h)
W	parameter characterizing the performance of a specific type of cooling equipment
S	parameter characterizing the performance of a specific type of cooling equipment
T	temperature
ER	rate of heat removal from the room
t_τ^r	air temperature in the room at time τ
t*_t	thermostat set point temperature at time τ
Δt_tr	throttling air temperature range
ER_τ	rate of heat removal from the room at time τ
ER_max	maximum rate of heat removal in the throttling range
ER_min	minimum rate of heat removal in the throttling range

Subscripts and superscripts
τ	time
r	room
tr	throttling range
t	thermostat set point
CG	cooling generation
PRI	primary
AT	air transport
WT	water transport
MOT	motor
TR	transmission
VEN	ventilation
ICT	information and communication technology
L	lighting

can be done within a given power budget and how it affects temperatures and airflows within the data center. There is therefore a need for simulation tools and models that approach the problem from the perspective of end users and take into account all the factors that are critical to understanding and improving the energy efficiency of data centers, in particular IT server characteristics, applications, and cooling. The objective of this study is to develop a data center energy performance evaluation tool that provides a prioritization of energy-impacting cooling system applications and helps engineers and designers derive reasonable alternatives, suitable for various climate zones, in the early design stage of data center planning. A key aim is that the tool evaluates the relative impact of component technologies through a defined design process, so that energy efficiency can be optimized. To accomplish this, we studied practical technologies of data centers

that greatly improved energy efficiency compared to existing facilities, analyzed the key technologies applied, and examined the correlation between each technology and its energy impact. It is important to understand the relationships between mutually dependent factors and associated technologies among the elements of data center cooling systems. An energy evaluation methodology for data center cooling systems was then developed. Starting from the selection of basic cooling equipment and considering the impact of related technologies, the input and output variables were set; the values used for these variables were derived from actual practice, from the results of computational fluid dynamics (CFD) analysis of the effectiveness and efficiency of the air management system [5], and from engineering data provided by equipment manufacturers. The developed evaluation tool reflects the uniqueness of the data center: calculation algorithms for small-impact cooling loads are simplified, while the elements that strongly affect ICT and cooling energy are represented by algorithms that account for their interrelationships. To prove the correctness of the calculations and methodologies, the data center energy evaluation tool was verified by comparison with TRNSYS. The research methodology and procedure are shown in Fig. 1.

2. Energy performance of data centers

2.1. Energy performance considerations

There are about 23,000 data centers in the world, and in 2014 the market was expected to grow to about $343.4 billion [6]. One large internet data center usually consumes 10–20 MW of electricity. Over the past 10 years, the energy consumption of data centers has doubled every four years, so customers are expected to be increasingly concerned about growing power consumption and heat removal [7]. Data centers are also about forty times more energy-intensive than conventional office buildings [8]. As shown in Fig. 2, demand-side systems, which include processors, server power supplies, auxiliary server components, storage, and communication equipment, account for 52% of total consumption. Supply-side systems include the uninterruptible power supply (UPS), power distribution units (PDU), cooling, lighting and building switchgear, and they account for the remaining 48% [9]. Ideally, a data center would operate at full efficiency in all areas, but ICT-related energy is difficult to control. Among the non-ICT factors, the cooling system accounts for the highest energy consumption and is therefore the most important factor for energy-optimized data centers. Among the established metrics for quantifying and comparing data center energy efficiency is the PUE, which is defined as the ratio of the total power drawn by a data center facility to the power used by the ICT equipment in that facility:

PUE = Total facility power / ICT facility power    (1)
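As a concrete illustration of Eq. (1), the following minimal sketch (not part of the published DCeET; the annual totals are hypothetical) computes PUE from metered facility and ICT energy.

```python
def pue(total_facility_energy_kwh: float, ict_energy_kwh: float) -> float:
    """Power usage effectiveness per Eq. (1): total facility energy divided by ICT energy.

    PUE cannot be lower than 1.0; a value of 1.0 would mean that all purchased
    energy reaches the ICT equipment.
    """
    if ict_energy_kwh <= 0:
        raise ValueError("ICT energy must be positive")
    return total_facility_energy_kwh / ict_energy_kwh


# Hypothetical annual totals (kWh): ICT load plus cooling, distribution losses and lighting.
print(round(pue(total_facility_energy_kwh=19_400_000, ict_energy_kwh=10_300_000), 2))  # 1.88
```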

Fig. 1. Research methodology and procedure.



Table 1
Issues of energy simulation of data centers.

Internal loads
(1) Internal loads of data centers are of very high density (>3.0 kW/m2)
• Though the loads change over time, they do not change on a daily basis
• Data center loads are predictable and easily measured by monitoring equipment (this cannot be done with people and envelope loads, which change through each day of operation)
(2) Internal loads of data centers are 100% sensible
• Calculation of the psychrometrics of the load process is irrelevant
• This is another area frequently prone to errors and variation in hourly energy simulations
• Ventilation, humidity control, and pressurization can be addressed with a small dedicated outdoor air system (DOAS)
(3) Internal loads of data centers increase over time
• Equipment is sized for the endpoint of each design module (it may take years to achieve the design load)
• This contrasts with occupied spaces

Sizing of equipment
Tier levels of data centers force the issue of what design values to use for sizing of equipment
• Higher tier levels may require sizing based on the 20-year high for the local climate
• Occupied spaces are required by ASHRAE Standard 90.1 to use the 0.4% design conditions

Envelope calculations
Envelope calculations are too complex in proportion to their contribution to the load
• Data centers have no windows or skylights
• The mass of walls and roof is usually large; the daily variation of envelope loads, small as they may be, is dampened by the thermal mass of the structure
• For a typical data center, envelope loads are 1–2% of the total building load; they are almost a non-issue
• The only need for insulation is to assure that inside surface temperatures do not drop below the room dew point

Supply air temperature
The supply air temperature selected for data centers is very high (approaching the upper end of the TC 9.9 thermal envelope)
• This eliminates the need for much of the psychrometric analysis in the simulation portion of the compute model
• The cold aisle should be at the supply air temperature when containment is used
• For modeling of occupied spaces, the room air temperature is close to the return air temperature; the supply air cannot be introduced directly to the occupied space without some level of mixing and/or induction of room air
• For modeling of data center spaces, the supply air temperature is the cold aisle temperature and the return air temperature is the hot aisle temperature
• There is therefore no zone thermostatic control

Zoning
(1) There are no perimeter zones with separate thermostatic control
• Simulation programs become prone to errors when considering individual VAV systems with reheat in perimeter zones
• The entire data hall is a single zone
(2) There are no heating loads
• There is no value in considering perimeter heating

Fig. 2. Analysis of typical data center energy consumption.

Various industry and government leaders agree that PUE is the preferred energy efficiency metric for data centers [10]. The calculation is shown in Eq. (1); PUE cannot be lower than 1.0 [11]. The ideal PUE of a data center is 1.0, which would mean that all energy consumed by the facility is used to power ICT equipment. Specifically, as shown in Fig. 3, within the total power consumption, the electricity used purely by the ICT equipment, excluding cooling, lighting, power losses, etc., must be measured in order to rate the effectiveness. There are many reasons why data centers should be treated differently from other types of occupied buildings. Though performing an energy simulation as commonly proposed is a legitimate approach, it complicates the analysis and subjects the design to more error. Hourly energy simulation programs (such as TRNSYS, EnergyPlus, eQuest, etc.) have become more sophisticated through constant maintenance over many years. Those who have attempted to model data center design loads and annual energy consumption have found that the suite of commercial programs available for occupied office spaces is not easily adaptable to data center projects. Data centers have very simple load-contributing components but highly complex load-growth and system-growth modeling challenges. The differences in energy simulation between occupied buildings and data centers, and their impact on the associated energy calculations, are shown in Table 1 [12].


2.2. Relationship between dedicated cooling system design and energy efficiency

Fig. 3. Power usage in the data center.

Through previous studies of data center energy efficiency [13–22], the component technologies of energy-efficient dedicated cooling systems can be derived. To optimize the energy performance of the air management and cooling plant systems, each technology should be evaluated in terms of its contribution to energy reduction, cost, ease of implementation, and availability


Fig. 4. Relationships and interactions between data center cooling system conditions.


Table 2
Energy modeling program comparison.

Developer: TRNSYS, Solar Energy Laboratory, University of Wisconsin-Madison; EnergyPlus, U.S. Dept. of Energy; eQuest, James J. Hirsch & Associates; DCeET, custom spreadsheets
Calculation engine: TRNSYS, TRNSYS 17; EnergyPlus, EnergyPlus v7.1; eQuest, DOE 2.2; DCeET, Microsoft Excel (typically)
Types of systems/spaces able to be modeled: TRNSYS, EnergyPlus and eQuest were developed for modeling the energy effects of the building envelope, lighting, and simple HVAC systems in comfort-conditioning-driven buildings; DCeET is an alternative calculation method, best suited for modeling systems energy rather than building envelope/solar effects on load
Baseline system type: custom for all four programs
Development status: TRNSYS, complete, with continuous updates; EnergyPlus, under development, no publicly available graphic user interface yet; eQuest, complete, with continuous updates; DCeET, ongoing

of relevant equipment. Component technologies with a high contribution to energy reduction were therefore reviewed and classified into three main categories; their cross-impacts were studied, and the applicable items were identified. From an energy perspective, the operating time and level of the air management systems (computer room air handling (CRAH) units, fans, etc.) are determined by the cooling load, and the capacity of the cooling plant system (chillers, pumps, etc.) is determined by the capacity of the air management systems, so there is a chain of correlations between the main items. As a result, the main categories are divided into the air management system sector, which includes load conditions, and the cooling plant system sector, related to central chilled water and hydronic distribution systems.

Step 1, setting priorities: for rapid decision-making on energy-optimized data centers, component technologies were arranged in order of priority for consideration when selecting cooling systems. The criteria are: (1) technologies with a higher impact on energy saving; (2) the more cost-effective technology where energy savings are similar; and (3) technologies whose effects can be measured quantitatively.

Step 2, setting applicable standards (grade level): each energy-efficient technology is categorized and arranged so that changes or additions can easily be made as new technologies are developed. In addition, the selection criteria were classified according to each level of application.

Step 3, relationships between items: where an item may affect the level of one or more other items, all items associated with it can be considered and displayed at the same time. For example, in the environmental conditions, if the planned indoor temperature and humidity range is relaxed, the supply air temperature of the CRAH units can be raised; when the supply air temperature is raised, the chilled water temperature is also affected and the chilled water supply temperature to the CRAH units can be set higher.

To build the technology selection system for low-energy data centers, the component technologies derived through the case studies were classified by priority, applicable criteria and the relationships between them. The selected component technologies and systems are shown in Fig. 4. First, the prototype model was set to the most common type of data center, and the level selection system, with a total of 10 detailed technologies, and the relationships between those technologies were investigated. In terms of energy consumption, among the main technologies of the air management and cooling plant systems there are individual components, such as fans and pumps, that do not affect or interact with each other, as well as cross-linked technologies that affect one another. The main cross-linked technologies are the environmental conditions, ICT equipment placement, air-side and water-side economizers, the CRAH unit supply air temperature,


the chilled water supply temperature and the chiller efficiency. These are the key technologies and the most influential factors for an energy-optimized data center.

3. Development of a data center energy performance evaluation tool

3.1. Need for energy simulation methodologies for data centers

Recently, when a data center is planned, clients increasingly require a low-energy facility because of the large operating energy costs. It is important to propose energy-optimized data centers corresponding to the local climate in the early design stage. Various commercial energy simulation programs are used, but existing energy analysis programs are optimized for occupied general buildings. How are data centers simpler than general buildings? There is no occupancy scheduling, there are no significant transient conditions, and solar incidence and envelope effects are negligible. This makes energy modeling much simpler for data centers than for office buildings. Furthermore, it takes a considerable amount of time to build a conventional simulation model, because it consists of complex mechanisms requiring a great deal of architectural information, such as the cooling load of the building envelope, solar heat gain and the heat storage of the building mass, which are closely related to the region and climate. However, such architectural elements represent only 3–5% of a data center's total load and can be neglected within the margin of error. For most data centers, the only factors that cause energy use to vary are the amount of equipment powered or installed in the space (the percent-populated variable) and the outdoor weather, which affects air- and water-side economizer operation, cooling tower capacity, humidification needs, the percentage of outside air and reset conditions [12]. For data centers, in which 95% of the cooling load is heat generated by ICT equipment inside the building [23], existing energy analysis programs are not appropriate for evaluating energy consumption. In addition, modeling the cooling system applications used to remove the ICT heat, such as aisle containment, air-side economizers, water-side economizers and CRAH units, is not easy. Data center engineering tools (such as 6 Sigma DC, Tile-flow, etc.) have been developed recently, but they are mainly used for CFD analysis to evaluate the air distribution in the IT server room and the heat removal performance. These engineering tools may partially analyze energy performance, but they require detailed IT server modeling and are limited in time and function when providing analysis for an entire data center. The data center energy performance evaluation tool (DCeET) developed here is focused on effectively analyzing cooling system applications to create energy-optimized data centers in the early stage of design. It is an easy, fast and simplified energy evaluation tool, which can

Table 3
Space design conditions for air management and cooling. The table defines three design schemes: I, hot aisle/cold aisle open; II, hot aisle/cold aisle fully enclosedA; III, in-row cooling solution. For each scheme it lists the design IT load density at full build-outB [kW/rack], the return air dry-bulb temperature set point [°C], the operating supply air temperature [°C], the operating air-side delta-TC [°C], the RH set point and toleranceD [%], the fan airflow efficiency metricE [m3/kW], the total static pressure [Pa], the operating CRAH or CRAC airflow capacityF [m3], and the cooling coil capacity per unit at baseline conditionsF [USRT].

A A fully enclosed cold aisle scheme is modeled to be identical to a fully enclosed hot aisle scheme, from the standpoint of temperatures, humidity, and total static pressure drop.
B Load density is the actual measurable load density at full build-out, not the design density including a safety factor, and is based on total data center floor area.
C Air-side delta-T does not include fan motor heat. Delta-T is the temperature difference between the supply air leaving the CRAC/CRAH and the air returning to the CRAC/CRAH.
D Humidity control range: "Thermal Guidelines for Data Processing Environments" [19]; the minimum dew point temperature and maximum relative humidity shown in this table are the "Recommended" range for Class 1 and 2 facilities. The values apply to the air entering the computer equipment.
E The airflow efficiency metric was created based on the baseline static pressure drop and baseline fan, drive, and motor efficiencies for a 15 hp motor.
F Determined based on a survey of CRACs and CRAHs from prominent manufacturers operating at the baseline static pressure drop.

Fig. 5. A development workflow in the data center energy evaluation tool.

calculate energy consumption in data centers from basic information. The DCeET is a transparent, simple approach: it addresses only the most significant loads, incorporates UPS losses more easily, and readily adapts to part-load performance (applicable to fans, chillers, pumps, cooling towers, UPS, etc.). This study considers the unique energy characteristics of data centers and develops an easy-to-use and accurate energy performance assessment tool dedicated to data centers. See Fig. 5 for the methodology used in developing the tool.

3.2. Methodologies of energy performance evaluation

The development of the DCeET had to be preceded by an investigation of the energy calculation equations to be used. In order to calculate the power consumption of the cooling systems, the cooling load and the power consumption of each piece of equipment must be calculated. When calculating the cooling load of general buildings, which are occupied 8–10 h per day and are strongly affected by the building envelope, methods that account for the regenerative heat transfer of the structure, such as the transfer function method (TFM), the radiant time series (RTS) method and the heat balance method, are generally used. However, in data centers, where heat generation from ICT equipment is absolutely dominant and operation is 24/7/365, it is not necessary to consider the effect of these small heat accumulations. Consequently, in calculating a data center's cooling load, a heat balance equation with simple steady-state heat transfer from the ICT equipment is used. For a room unit of a data center, the total cooling load can be calculated as a simple sum of the loads of independent components. Because the typical architectural contributions other than the ICT equipment load, such as lighting, people, envelope heat flux, solar radiation, and infiltration, amount to less than 5% of the total cooling load, constant cooling load values per unit area taken from conventional statistics for typical office buildings were applied. Data center spaces are planned without any windows, so no solar radiation enters directly, and the CRAH unit rooms located on the perimeter act as a thermal buffer, so the interior of the server room is thermally affected very little by the outside climate. The thermal load through the building envelope, including incident solar radiation, can therefore be neglected. ASHRAE TC 9.9, in presenting energy-saving techniques for large-scale dedicated data centers, likewise excludes architectural elements of low impact from the priorities [21]. The currently dominant modeling programs are listed in Table 2 with key notes about how they work and the type of modeling for which they are best suited. Programs other than those listed here may be used for incentive calculations, but the calculation approach must be approved by the utility administering the customized incentive program prior to performing the incentive calculations.
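As an illustration of the simplified steady-state load calculation described above, the sketch below sums the dominant ICT heat gain with constant per-area allowances for the remaining architectural contributions; the allowance for miscellaneous loads is a placeholder standing in for the statistical values mentioned in the text, not a figure taken from the DCeET.

```python
def room_cooling_load_kw(q_ict_kw: float,
                         floor_area_m2: float,
                         lighting_w_per_m2: float = 22.0,
                         other_w_per_m2: float = 10.0) -> float:
    """Steady-state cooling load of a data hall.

    The ICT heat gain dominates; lighting and other architectural loads are
    treated as constant allowances per unit area, and envelope, solar and
    infiltration loads (<5% of the total) are neglected.
    """
    architectural_kw = floor_area_m2 * (lighting_w_per_m2 + other_w_per_m2) / 1000.0
    return q_ict_kw + architectural_kw


# Example: a 2250 m2 server room with 2880 kW of ICT heat gain (values from Section 4).
print(round(room_cooling_load_kw(q_ict_kw=2880.0, floor_area_m2=2250.0), 1))
```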


Fig. 6. Typical electric power and accompanying heat flow in a data center.
Fig. 7. Low processor activity does not translate into low power consumption.

3.2.1. Size categories for data centers

Data centers are spaces specifically designed to accommodate dense arrangements of computer equipment and the associated networking, telecommunications, storage and auxiliary equipment required to store, process, manage and disseminate data and information. A data center may include redundant power supplies, backup power equipment, and HVAC equipment. Small data centers are facilities with a total design IT load of up to and including 1 MW. Large data centers are facilities with more than 1 MW, and high-density data centers are those with rack load densities greater than 10 kW.

3.2.2. Cooling and ICT load estimation

The space design conditions for data centers depend on the actual load density of the IT equipment. The appropriate air management and cooling for a data center can be determined using Table 3 [24]. Data centers with rack load densities greater than 10 kW/rack require alternative cooling strategies. The cooling system for design scheme III is an in-row cooling solution, defined as a system that cools only one rack or one aisle of equipment and is physically located in the row; an in-row solution requires running chilled water or refrigerant to each rack or aisle. This program does not apply to data centers with a design IT load larger than 10 kW/rack. Cooling load estimates are obtained using the conventional data center cooling load estimation method, which considers the power demand per unit area (kW/m2) depending on the thermal characteristics of the various types of ICT equipment and the data center space usage [25]. The ICT components in a data center, such as servers, PDUs, and UPSs, generate a significant amount of heat and consume a significant amount of power [26], depending on the IT workload [27]. Fig. 6 shows the power flow of each component and the accompanying thermal flow inside a modular data center [28]. Some of the power supplied from the electricity grid to the UPS is lost through heat dissipation during storage and transmission. Power is distributed to each server through the PDU, and this distribution also involves some transmission loss. Each server consumes the power delivered to it to process its IT workload, and this process also generates heat. Envelope loads are not typically modeled for data centers. The full build-out IT load and load density are used to determine the system type and capacity. IT servers are the greatest sources of heat generation in a data center, and their power consumption varies depending on the IT workload, hardware, and software [29]. Even when the IT server processors are idle, most servers consume 75–90% of the peak load power (Fig. 7). Hence, heat is continuously generated from standby power, and because of this the partial load approaches 80% of the peak load [30]. This study does not address the energy efficiency of ICT equipment.
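The near-constant draw shown in Fig. 7 is often represented by a linear utilization model; the sketch below uses that common approximation (it is not a formula given in this paper), with an idle fraction consistent with the 75–90% figure quoted above.

```python
def server_power_kw(peak_kw: float, cpu_utilization: float, idle_fraction: float = 0.8) -> float:
    """Approximate server electrical draw as a linear function of CPU utilization.

    With idle_fraction = 0.8 an idle server (utilization = 0) still draws 80% of
    its peak power, so low processor activity does not translate into low power
    consumption (Fig. 7).
    """
    u = min(max(cpu_utilization, 0.0), 1.0)
    return peak_kw * (idle_fraction + (1.0 - idle_fraction) * u)
```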

Fig. 8. Heat and power flow of power transmit units in a data center.

As shown in Fig. 8, the power supplied to the UPS is mostly transmitted to the PDU; however, a portion of the power is lost during storage and transmission. The power consumption of the PDU and UPS, including the IT load operation, was calculated using the following equations [31]:

P_ICT = P_Server + L_PDU + L_UPS + L_Switchgear    (2)

L_ICT = L_UPS + L_Switchgear    (3)

Q_ICT = (P_Server + L_PDU) × 0.8    (4)
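A minimal sketch of Eqs. (2)–(4); the loss terms are assumed to be supplied by the user or estimated separately (for example from the 10% UPS and 4% switchgear loss fractions used later in Table 7).

```python
def ict_power_and_heat(p_server_kw: float,
                       l_pdu_kw: float,
                       l_ups_kw: float,
                       l_switchgear_kw: float) -> tuple[float, float, float]:
    """Return (P_ICT, L_ICT, Q_ICT) according to Eqs. (2)-(4)."""
    p_ict = p_server_kw + l_pdu_kw + l_ups_kw + l_switchgear_kw  # Eq. (2): total ICT power drawn
    l_ict = l_ups_kw + l_switchgear_kw                           # Eq. (3): ICT-side distribution losses
    q_ict = (p_server_kw + l_pdu_kw) * 0.8                       # Eq. (4): heat load with the 0.8 factor
    return p_ict, l_ict, q_ict
```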

Once the power density of the data center's white space is determined, the heat flux required to remove the thermal load can be calculated. The heat removal characteristic of the cooling system can be expressed by the simple linear relationship in Eq. (5). This linear relationship holds only when the room temperature is within the controllable range of the system; when the room temperature is outside the controllable range, the heat removal rate takes the value ER_min or ER_max. As a function of room temperature, the slope S of the straight line representing the heat removal rate is given by Eq. (6), and the intercept W by Eq. (7) [32]:

ER_τ = W + S·t_τ^r    (5)

S = (ER_max − ER_min) / Δt_tr    (6)

W = (ER_max + ER_min) / 2 − S·t*_t    (7)
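The throttling-range model of Eqs. (5)–(7) translates directly into code; the clamping outside the controllable range follows the description above.

```python
def heat_removal_rate(t_room_c: float, t_set_c: float, dt_throttling_k: float,
                      er_min_kw: float, er_max_kw: float) -> float:
    """Heat removal rate as a linear function of room temperature, Eqs. (5)-(7).

    Outside the throttling range the rate saturates at ER_min or ER_max.
    """
    s = (er_max_kw - er_min_kw) / dt_throttling_k       # Eq. (6): slope
    w = (er_max_kw + er_min_kw) / 2.0 - s * t_set_c     # Eq. (7): intercept
    er = w + s * t_room_c                               # Eq. (5)
    return min(max(er, er_min_kw), er_max_kw)
```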

3.2.3. Cooling system energy evaluation

In data centers, the cooling system is critical in terms of optimizing the control settings to reduce energy consumption, improving system efficiency, and preserving the best environment for the ICT equipment, i.e. conditions that satisfy the recommended thermal envelope [19]. The global energy consumption of a cooling system may be obtained by summing the energy use of all its energy-consuming devices. Between the global and equipment levels, however, two additional aggregation levels can be distinguished: subsystems and services (Fig. 9). Three main subsystems (cool generation, water transport and air transport) and three kinds of energy flows


Fig. 9. Data center cooling systems energy use aggregation levels.

Fig. 10. Energy subsystems in data center cooling systems.

(thermal demands, consumptions and losses) can be identified for the energy analysis of data center cooling systems (Fig. 10). Thermal demands quantify heat transfers: at the conditioned spaces, to the distribution air flow, at the coils, or by the primary equipment for cool generation. Consumptions refer to the energy end use of conversion devices, mainly thermal generators and fluid movers. Energy losses are due to wasted energy or equipment inefficiencies. Basic energy conservation equations for each of these subsystems are written in terms of final energy, making use of thermal loads (Q), energy losses (L) and energy (power) consumptions (P). The plant system consists of the set of equipment responsible for cool generation. Heat extraction from the cold source requires the consumption of a given amount of energy, and the energy rejected to the environment is positive and higher than the absolute value of the heat extracted from the cold source:

P_CG = Q_CG + L_CG    (8)

The water transport system is made up of the hydraulic equipment intended to drive the primary fluid, usually water, from the plant system to the water coils of the air-conditioning system. Heat added to or extracted from the primary fluid by the plant system is usually referred to as the primary load (negative for cool generation). Thus the balance equation for the transport system is:

Q_PRI = Q_COIL + L_WT − P_WT    (9)

There are two kinds of water transport losses: those in the water distribution network and those due to inefficiencies in the pumping equipment. The difference between the water transport electric consumption and the pump losses is thermally degraded in the fluid and can be referred to as pump heat:

Q_WT = P_WT − L_PUMP = P_WT − (L_MOT + L_TR)    (10)

The air-conditioning system is responsible for air distribution throughout the building. The only energy use involved is that of the fans. If the fan motor is placed in the air stream, all the consumed energy is thermally degraded and can be called fan heat:

Q_AT = P_AT    (11)

Losses either have a thermal character or are caused by air leaks or infiltration. From the mass and energy balances in this subsystem the following equation can be derived:

Q_COIL = Q_ICT + Q_VEN − Q_AT + L_AT    (12)

The global energy consumption of a data center cooling system may be obtained by summing the energy use of all its energy-consuming devices:

P_Cooling = P_CG + P_WT + P_AT    (13)

PUE, which is an indicator of a data center's total energy efficiency, is related to the ICT cooling energy, the UPS power loss, the switchgear/electricity distribution loss, lighting, and other non-ICT cooling energy (Figs. 3 and 8):

PUE = Total facility power / ICT facility power = (P_ICT + L_ICT + P_Cooling + P_L + P_etc.) / P_ICT    (14)
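A sketch of the aggregation in Eqs. (13) and (14). The variable names mirror the nomenclature; the subsystem consumptions would come from the balance equations above, and the lighting and miscellaneous terms from separate estimates.

```python
def cooling_power_kwh(p_cg: float, p_wt: float, p_at: float) -> float:
    """Eq. (13): cooling electricity = cool generation + water transport + air transport."""
    return p_cg + p_wt + p_at


def pue_from_components(p_ict: float, l_ict: float, p_cooling: float,
                        p_lighting: float, p_etc: float) -> float:
    """Eq. (14): PUE from ICT power, ICT-side losses, cooling, lighting and other loads."""
    return (p_ict + l_ict + p_cooling + p_lighting + p_etc) / p_ict
```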

An alternative compliance path allows a simpler method using PUE, but this does not give the designer a true, energy-based aid for making design decisions. Fig. 11 shows the basic structure and process of the data center system energy evaluation. It is also common practice to consider another aggregation level for cooling systems, related to the services provided, typically IT server cooling and air distribution. Two air-side economizer alternatives and one water-side economizer alternative applicable to the data center model


Fig. 11. The basic structure and process of a data center system energy evaluation tool.

were analyzed in this program. A direct air-side economizer system works by introducing outdoor air (OA) when the OA temperature is cooler than the supply air (SA) temperature. An indirect air-side economizer system preconditions the return air (RA) from the data center through sensible heat exchange with the OA to reduce the cooling coil load; this system does not supply OA directly to the data center. Two types of sensible heat exchanger, a heat pipe and a heat wheel, were considered. Fig. 12 shows the operation and energy evaluation algorithm of the direct and indirect air-side economizer systems described above [28].
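The hour-by-hour logic of Fig. 12 can be sketched as a simple mode decision based on outdoor, supply and return air temperatures; this is a simplified reading of the algorithm, not the exact control sequence or psychrometric treatment implemented in the DCeET, and the approach-temperature handling for the indirect case is an assumption.

```python
def airside_economizer_mode(t_oa_c: float, t_sa_c: float, t_ra_c: float,
                            approach_k: float = 0.0) -> str:
    """Classify one operating hour of an air-side economizer.

    t_oa_c: outdoor air temperature; t_sa_c: required supply air temperature;
    t_ra_c: return (hot aisle) air temperature. For an indirect economizer the
    usable outdoor air temperature is penalized by the heat exchanger approach
    temperature (approach_k = 0 for a direct system).
    """
    t_effective = t_oa_c + approach_k
    if t_effective <= t_sa_c:
        return "full free cooling"       # economizer alone meets the supply condition
    if t_effective < t_ra_c:
        return "partial free cooling"    # economizer pre-cools; the chiller trims the remainder
    return "mechanical cooling only"     # outdoor air too warm to be useful
```

With the 4 °C approach temperature used for the indirect system in Section 5, approach_k=4.0 would be passed for indirect operation.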

Fig. 12. Direct and indirect air-side economizer systems’ operation and energy evaluation algorithm.


(3) Consider ASHRAE’s data center operating temperature and humidity ranges. (4) Estimate the effect of temperature reset sequences, on IT equipment life. (5) Show how energy costs accrue, even if equipment is deferred. (6) Show adjustments affect peak PUE. (7) Include recommended maintenance hours in annual costs. (8) Compare baseline and proposed UPS efficiencies. Fig. 13. The various set conditions reflect the different efficiency of various cooling systems.

co-relationships, user requirements are derived and input/output parameters were set from these as shown in Fig. 11. In calculating energy consumption of data center cooling systems, the degree of interrelated component technologies are considered, if the user’s input is changed, then affected variables of items are made to be changed accordingly. The overall relationship between the configuration conditions which the users select or enter and the calculated item is shown in Fig. 13. Typically, the indoor environmental condition is set from the average indoor temperature and humidity, but in order to remove the heat from the ICT equipment, the air temperature supplied to the IT server and humidity conditions is controlled variables. Therefore, considering recirculation and bypass, conditions of the supply air are to be set within the room set conditions to be determined. The airflow rate is determined by ICT equipment’s own fans. Therefore, it is assumed if the supply temperature is changed, then the return air temperature also would be changed to maintain T (average 10 K) and the airflow rate would not be changed. Power consumption of chiller is made to vary depending on its efficiency (COP), power consumption of cooling tower is made to vary depending on the type of it automatically with the equations provided by the manufacturers and entering common data. Power consumption of fans and pumps are calculated by airflow rate, head loss, brake horse power from the efficiency of selected type of motor. The head loss is calculated with the pipe or duct length entered by the user and friction loss of attached fittings. CRAH unit fan’s static pressure was applied to be calculated differently depending on the type of filters, added static pressure loss in case of air-side economizer cycle to simulate the changes due to the introduction of outside air. Fig. 14 shows the overall input and output variables and calculation procedure of the DCeET. A better approach would be to use a spreadsheet using TMY3 data, which allows full 8760 h/year simulations. Fig. 15 shows an example DCeET structure and these energy models for a data center should: (1) Use hourly TMY3 data to precisely represent annual PUE and operating costs. (2) Display the model results across the range of data center population.
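The input propagation and the hourly evaluation described above can be sketched as follows; the reset slope, the COP adjustment per kelvin and the free-cooling test are illustrative assumptions, not the coefficients or control logic embedded in the DCeET.

```python
from dataclasses import dataclass, replace


@dataclass
class DesignInputs:
    supply_air_c: float = 16.0    # CRAH supply air temperature
    delta_t_k: float = 10.0       # server airflow is fixed, so the air-side dT is held constant
    chw_supply_c: float = 10.0    # chilled water supply temperature
    chiller_cop: float = 5.5


def raise_supply_air(inputs: DesignInputs, new_supply_air_c: float) -> DesignInputs:
    """Propagate a supply-air-temperature change to the linked variables."""
    rise = new_supply_air_c - inputs.supply_air_c
    return replace(inputs,
                   supply_air_c=new_supply_air_c,               # return air follows at SA + dT
                   chw_supply_c=inputs.chw_supply_c + rise,     # illustrative 1:1 chilled water reset
                   chiller_cop=inputs.chiller_cop + 0.1 * rise)  # illustrative COP gain per K


def annual_chiller_electricity_kwh(hourly_oa_c, q_ict_kw: float, inputs: DesignInputs) -> float:
    """8760-h skeleton: chiller electricity over a TMY3 year (fans, pumps and towers omitted)."""
    total = 0.0
    for t_oa in hourly_oa_c:
        if t_oa <= inputs.supply_air_c:   # hour served by free cooling
            continue
        total += q_ict_kw / inputs.chiller_cop
    return total
```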

4. Verification environment

4.1. Verification of the data center energy evaluation tool

To ensure the accuracy of the models and evaluations within the DCeET, its elements have to be based on, and verified against, simulations performed with commercial software. As mentioned in the previous section, commercial building energy simulations are not well suited to data center analysis. The internal and process loads in data center spaces are typically far greater than any HVAC loads caused by heat transfer through the building envelope. Additionally, the HVAC systems serving high-tech industrial spaces are often not easily modeled in software programs designed for commercial spaces. As a result, the energy calculations for high-tech industrial facilities typically ignore envelope loads; in these cases, envelope loads can be ignored in high-tech energy models built with custom software packages or commercially available software. This is a concern about the effectiveness of the energy calculation method, not about the reliability of the simulation results. It also means that it is not necessary to spend a considerable amount of time building a simulation model with complex input procedures such as the building envelope load. Verification of the reliability of the DCeET therefore requires comparing its results with those of commercially available software. This process was performed partly in parallel with the program development, together with the assessment of the impact of each element. To verify the energy performance of the DCeET, its results were compared with simulations executed in commercial energy modeling software. To prove the correctness of the calculations and methodologies, the DCeET was verified by comparison with TRNSYS, a building simulation program (BSP) capable of analyzing complex cooling system applications. First, the building energy demand (cooling load) was simulated and used as the initial cooling load profile for analyzing the cooling system. In previous research, we (Cho et al. [9]) analyzed the energy-saving impact of the two major cooling system applications for data centers in temperate/subtropical climates using a commercial energy modeling program (TRNSYS). The cooling energy consumption from that study is used to compare against the results of the data center energy evaluation tool. Climate data for Korea are used to compare the monthly energy consumption of the baseline system as well as the water-side and air-side economizer systems.

4.2. Case study building descriptions

Fig. 14. Energy evaluation process for data centers.

In this study, a prototypical data center was selected to form the basis for the performance evaluation of a variety of cooling system applications. The heat gain from ICT equipment and the other load factors reflecting the data center's operating characteristics were reflected, and the annual cooling load was derived. The case study data center was modeled using the TRNBuild module of the TRNSYS 16 software. In order to estimate the server heat generation to be used in the energy simulation, the number of IT servers that can be installed according to the standard server placement method is first calculated. Fig. 16 shows a typical floor server room layout of the case study data center. The data center has 9 floors, each with an IT server room area of 2250 m2 located at the center of the floor plate. The


Fig. 15. An example data center energy evaluation tool structure (screenshots).

total heat generation of the IT servers and the heat load per unit area for a typical floor are 2880 kW and 1280 W/m2, respectively. The indoor design condition is set to DB 22 °C and RH 50% in cooling mode. The major boundary conditions are listed in Table 4.

4.3. Simulation results

The annual cooling load was analyzed by applying Seoul's weather file. The simulation results show that over 95% of the total annual cooling load is due to the ICT equipment load; hence, a nearly constant cooling load occurs throughout the year regardless of the season. As shown in Fig. 17, the annual load distribution from TRNSYS lies in the range of 18,500–23,300 kW depending on the operating rate of the IT

servers. Falling between 79% and 100% of the maximum load, a high load factor is maintained throughout the year. Since the heat generation of the IT servers dominates the cooling load, the DCeET result, which ignores the building envelope load, shows a constant cooling load distribution throughout the year. For verification, three cooling system applications were compared; the systems are composed as shown in Table 5. Cooling system #1 (the reference base cooling system) is a typical central chilled water system based on the concept of installing chillers in the central mechanical room and supplying chilled water to the CRAH units. The chilled water system comprises chilled water pumps and condenser water pumps supplying condenser water to the cooling towers. System #2 is the reference base cooling system with a water-side economizer, which is typically


Fig. 16. (a) A case study datacenter, (b) typical floor plan and IT server arrangements (Total 9 floors) and (c) schematic diagram of data center cooling systems.

incorporated into a chilled water cooling system. System #3 adds an air-side economizer system, which serves as a control mechanism to regulate the use of outside air for cooling the IT server rooms. The major difference between TRNSYS and the DCeET is the building envelope load calculation process: in the DCeET the calculation of complex architectural elements is simplified and the focus is on the data center cooling system application. For this reason, some differences occurred in the seasonal cooling energy consumption. As shown in Fig. 18, the monthly cooling energy consumption was much the same, except for the winter energy use of the air-side economizer system, which showed a difference of about 8%. The total primary energy consumption difference for cooling system #1 was 1.79%, for system #2 it was 3.31%, and for system #3 it was 0.55% (Table 6). The reason is that the proportion of the cooling load arising from the building envelope (walls, roof, windows, etc.), solar radiation and the heat storage of the building mass, which has

one of the most complex mechanisms in energy analysis, is below 5% of the total cooling load, while 95% or more is internal heat gain (IT server heat load) at a constant rate throughout the year. The PUE calculated by Eq. (14) is related to the ICT cooling energy, UPS power loss, switchgear/electricity distribution loss, lighting, and other non-ICT cooling energy, as shown in Table 7. If the margin of error in the cooling energy is small and the other conditions are input identically, the PUE results are similar. Finally, based on the same cooling load profile, the PUE of the three cooling system applications was compared. There were minor differences in each component, but the total cooling energy consumption pattern was similar, within about a 2% range of error. As mentioned above, the accuracy of the DCeET's energy consumption results is important, but the purpose is, above all, to provide a prioritization of energy-impacting cooling system applications to the engineer or designer, suitable for the local climate, in the early design stage. Ultimately, the goal is to obtain the basic data needed to realize energy-optimized data centers.

5. Discussion: cooling system application impact assessment for energy performance

Fig. 17. Compression of 8760 hourly cooling loads. (One year sequence)

In this section, the DCeET is used to analyze and evaluate the contribution of each technology to energy performance, energy cost, ease of system implementation, and equipment applicability. The reference model is the same case study building as in Section 4, and the PUE calculated by the program is 1.92. Table 8 shows the effect of the cooling system and the associated elements

Fig. 18. Comparison of monthly cooling energy demand of system #1–3.

Table 4
Simulation boundary conditions (building information).

Location (site): Seoul, Korea
Size (GFA): 65,000 m2 (B2/10F)
Typical server room: 2250 m2/floor (9 floors in total)
No. of IT servers: 720 EA/floor (6480 EA in total)
Design temperature: 22 °C (ASHRAE Class 1 [19])
Design RH: 50% (ASHRAE Class 1 [19])
IT heat load: 1280 W/m2 (4.0 kW/rack)
Lighting: 22 W/m2 (2.0 W/ft2)
People: 102 W/person (max. no. of persons)
Infiltration: N/A (positive pressure)

on the total energy of the data center, calculated with the developed program using the reference model's power usage and energy costs for Seoul. Based on the reference model, 35 items were varied and analyzed, with overlapping portions also considered. The relationship between the technical items of the cooling system and energy consumption is either independent, as for fans, pumps, and hydronic thermal transport equipment, or inter-related across several technologies. The major inter-related technologies are the environmental (load) conditions, ICT equipment layout, air-side economizer, water-side economizer, CRAH unit operating conditions, supply air temperature, cooling plant efficiency, and chilled water temperature conditions. Therefore, for independent items only the condition of that item needs to be changed, whereas inter-related technologies must change the


associated conditions and be analyzed simultaneously. For example, if an air-side economizer cycle is to be analyzed and the indoor temperature and humidity condition is relaxed to increase the period of economizer use, then the CRAH unit supply air temperature and the outdoor air intake filter type need to change as well. In addition, the chilled water supply temperature increases if the supply air temperature is increased and, in conjunction, the COP of the chiller increases as well, showing a combined series of effects. If the supply air temperature is greater than 18 °C, the adoption of an aisle containment system, which physically separates the cold aisle and hot aisle, is considered essential for the analysis in view of cooling efficiency. In the analysis of energy effectiveness (on a Seoul basis) considering these overall factors, the direct air-side economizer cycle, which includes evaporative cooling, results in cooling energy savings of up to approximately 68% and is the largest single factor. The indirect air-side economizer cycle with an approach temperature of 4 °C provides up to 50%, and the water-side economizer system up to 15%; cooling plants using outside air therefore have a large impact on cooling efficiency. Next, chiller-performance-related items, such as the chilled water temperature conditions and COP, provide approximately 10%, and the CRAH unit operating conditions approximately 3.4%. Independent items, such as equipment efficiency, inverters and EC motors, each provide less than 2%. Although it was not analyzed directly, relaxation of the environmental conditions is linked to many factors and can therefore be considered the most effective measure in terms of cost and system; however, this is a sensitive operational issue and may only be applied subject to the ICT equipment performance or the decision of the client.
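For reference, the energy reduction rate reported in Table 8 is presumably the relative saving against the baseline, i.e. (1 − E_case/E_baseline) × 100, evaluated on the annual electricity consumption of each case; the percentages quoted above (about 68%, 50% and 15%) follow from this definition.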

Table 5
Application components of cooling systems #1–3. Each entry gives: capacity, power, quantity*.

System #1 (Base)
CRAH units: 105 kW, 5.5 kW, 360 (36)
Turbo chillers: 1000 USRT, 575 kW, 9 (1)
Chilled water pumps: 10,000 LPM (800 Pa), 75 kW, 9 (1)
Cooling towers: 1000 CRT, 75 kW, 9 (1)
Condenser water pumps: 13,000 LPM (350 Pa), 110 kW, 9 (1)
Total: –, 11,277 kW, –

System #2 (System #1 with water-side economizer)
Heat exchanger: 3500 kW, –, 9 (1)
Chilled water pumps: 10,000 LPM (800 Pa), 75 kW, 9 (1)
+ System #1 (Base): –, 11,227 kW, –
Total: –, 12,902 kW, –

System #3 (System #1 with air-side economizer)
OAH (airfoil type): 80,000 CMH (600 Pa), 18 kW, 14
EA fan (sirocco type): 80,000 CMH (500 Pa), 18 kW, 14
+ System #1 (Base): –, 11,227 kW, –
Total: –, 11,731 kW, –

* Stand-by equipment in parentheses.

Table 6
Comparison of annual cooling energy demand of cooling systems #1–3 (MW h/yr).

System #1 (reference base cooling system): TRNSYS 77,079; DCeET 75,702; difference 1.79%
System #2 (System #1 with water-side economizer system): TRNSYS 64,257; DCeET 62,132; difference 3.31%
System #3 (System #1 with air-side economizer system): TRNSYS 44,515; DCeET 44,272; difference 0.55%
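As a check, the Difference (%) column is consistent with the relative deviation (TRNSYS − DCeET)/TRNSYS × 100: for system #1, (77,079 − 75,702)/77,079 × 100 ≈ 1.79%, matching the tabulated value.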

Table 7
Comparison of the PUE of cooling systems #1–3.

Columns: IT power (kW h), IT server; non-IT power (MW h), coolingA, coolingB, non-IT cooling, lighting, etc.; power losses (MW h), switchgear (4%), UPS (10%); PUE, TRNSYS, DCeET.

System #1: 103,000; 77,079; 75,702; 380; 440; 3110; 10,300; 1.92; 1.92
System #2: 103,000; 64,257; 62,132; 380; 440; 3110; 10,300; 1.81; 1.82
System #3: 103,000; 44,515; 44,272; 380; 440; 3110; 10,300; 1.62; 1.62

A Simulation results of TRNSYS.
B Simulation results of DCeET.

Table 8
Influence of design variations on data center energy efficiency in the Korean climate. For each case the table lists the set conditions (dry-bulb temperature [°C], relative humidity [%], supply air temperature [°C], aisle containment), the electricity consumption [kW/m2] and energy cost [Won/m2], the resulting PUE, and the energy reduction rate [%] relative to the baseline. The 35 design variations, against the baseline central chilled water (water-cooled) system, comprise: cooling type (water-cooled; air-cooled DX; air-cooled chiller); aisle containment; supply air (SA) temperature (four settings); chilled water temperature (7 °C, 10 °C, 12 °C, ΔT = 7 °C); chiller efficiency (COP 5.5 → 6.3); cooling tower type (open circuit, forced draft open, forced draft closed); pump efficiency 70%; fan efficiency 70%; EC motors for pumps/fans; free cooling with a water-side economizerA (four settings); indirect air-side economizerA (four settings); direct air-side economizerB (four settings); air-side economizer with evaporative cooling (two settings); UPS power losses; and switchgear power losses.

A Approach temperature: 4 °C.
B Economizer cycle with low-pressure filter (150 Pa).

6. Conclusions In this study, the uniqueness of the data center in consideration of applying energy-saving techniques and quantitative impact of dedicated cooling systems on the energy to implement a data center is the goal. The main objective of this study is defined by energy analysis process, numerical studies, and simulation studies to assess each technical component’s influence in order to create energy-optimized data centers. The developed evaluation tool maximized in reflecting the uniqueness of the data center, where calculation algorithms of small-impact cooling loads are simplified and an element that largely impacts the ICT and cooling energy, are reflected by algorithm in consideration to their inter-relationship. The main results of this study are as follows. (1) The main category of data center dedicated cooling energy efficiency related technical components are divided into air distribution systems, which includes environmental conditions, and cooling plant systems. Three stages, which are prioritized by major cooling technology, standards setting for application, and relationship research by items, by major technologies with a system that can be placed by a total of 10 detailed technologies as well as in conjunction to energy was analyzed. (2) There are many reasons why data centers should be treated differently from other types of occupied buildings. Though performing the energy simulation as proposed is a legitimate approach, it complicates the analysis and subjects the design to more error. Energy simulations for the purpose of making this comparison do not handle data center designs well. A data center energy performance evaluation program (DCeET) was developed in consideration to the maximum uniqueness of data center operating characteristics. The energy consumption difference between the DCeET and the commonly used energy simulation TRNSYS was under 5%. (3) The DCeET maximized in reflecting the uniqueness of the data center, where calculation algorithm of small-impact cooling loads are simplified and elements that largely impacts the IT and cooling energy, are reflected by algorithm in consideration to their inter-relationship. The algorithms that can calculate the contribution to energy between technical components is the key to the development, thus, energy-optimized system is derived in consideration to dedicated cooling systems. (4) The DCeET was developed to analyze and evaluate the contribution level in energy performance by each technology, energy cost, system implementation, and equipment applicability. Based on Seoul, direct air-side economizer cycle with evaporative cooling can save about 68% of energy and is considered as the largest impact item, where indirect air-side economizer system is 50% and water-side economizer system is 15% showing that cooling plant system by using outside air is highly effective. Next, cooling plant equipment efficiency and CRAH unit’s operating condition has about 10% and 3.4%, respectively, reducing effect. In order to build a data center, the selection of dedicated cooling systems in relationship to the location of the facility as to the climate zone must be considered primarily, and then the cost and applicability must be accounted for to determine the class. Also, through consulting with the client (user), IT servers operating conditions as well as the building shape depending on the detailed usages is to be determined. 
Through the data center energy analysis process and the energy performance evaluation tool developed in this study, engineers and designers can be given a prioritization of cooling system technologies according to their energy impact, suited to the local climate, in the early design stage. Ultimately, the goal is to obtain the basic data needed to realize energy-optimized green data centers. A separate study on CFD analysis of the server room's air management system, together with research on cooling system energy efficiency from the perspective of total energy, needs to be incorporated to evaluate the overall inter-related effects.

References

[1] J.G. Koomey, Estimating Total Power Consumption by Servers in the U.S. and the World, 2007.
[2] M. Hodes, et al., Energy and Power Conversion: A Telecommunications Hardware Vendor's Prospective, Power Electronics Industry Group (PEIG) Technology Tutorial and CEO Forum, Cork, Ireland, 2007.
[3] J. Cho, S. Shin, J. Lee, Case study and energy impact analysis of cooling technologies as applied to green data centers, J. Archit. Inst. Korea 29 (3) (2013) 327–334.
[4] M. Berge, G. Da Costa, A. Kopecki, A. Oleksiak, J.-M. Pierson, T. Piontek, E. Volk, S. Wesner, Modeling and simulation of data center energy-efficiency in CoolEmAll, energy efficient data centers, Lect. Notes Comput. Sci. 7396 (2012) 25–36.
[5] J. Cho, J. Yang, W. Park, Evaluation of air distribution system's airflow performance for cooling energy savings in high-density data centers, Energy Build. 68 (1) (2014) 270–279.
[6] Y. Nah, H. Mok, Standardization trends and certification strategy of green IDC for green computing, Telecommun. Rev. 21 (3) (2011) 392–403.
[7] J.G. Koomey, Worldwide electricity used in data centers, Environ. Res. Lett. 3 (3) (2008), http://dx.doi.org/10.1088/1748-9326/3/3/034008.
[8] S. Greenberg, E. Mills, B. Tschudi, P. Rumsey, B. Myatt, Best practices for data centers: lessons learned from benchmarking 22 data centers, in: Proceedings of the ACEEE Summer Study on Energy Efficiency in Buildings, 2006, pp. 76–87.
[9] J. Cho, T. Lim, B.S. Kim, Viability of datacenter cooling systems for energy efficiency in temperate or subtropical regions: case study, Energy Build. 55 (12) (2012) 189–197.
[10] Green Grid, Recommendations for Measuring and Reporting Overall Data Center Efficiency, Version 1: Measuring PUE at Dedicated Data Centers, Green Grid, July 2010.
[11] J. Haas, J. Froedge, Usage and public reporting guidelines for the green grid's infrastructure metrics (PUE/DCiE), in: White Paper #22, The Green Grid, 2009.
[12] V. Sorell, J. Sloan, The need for energy modeling software for data centers, in: 2015 ASHRAE Winter Conference, January 24–28, 2015, USA.
[13] American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment, Best Practices for Datacom Facility Energy Efficiency, second ed., American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., 2009.
[14] K. Choo, R.M. Galante, M.M. Ohadi, Energy consumption analysis of a medium-size primary data center in an academic campus, Energy Build. 76 (6) (2014) 414–421.
[15] M.P. David, M. Iyengar, P. Parida, R. Simons, M. Schultz, M. Gaynes, R. Schmidt, T. Chainer, Experimental characterization of an energy efficient chiller-less datacenter test facility with warm water cooled servers, in: 28th IEEE Semi-Therm Symposium, March 18–22, 2012, USA.
[16] T. Lu, X. Lü, M. Remes, M. Viljanen, Investigation of air management and energy performance in a data center in Finland: case study, Energy Build. 43 (12) (2011) 3360–3372.
[17] K. Lee, H. Chen, Analysis of energy saving potential of air-side free cooling for data centers in worldwide climate zones, Energy Build. 64 (9) (2013) 103–112.
[18] American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment, High Density Data Centers: Case Studies and Best Practices, American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., 2008.
[19] American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment, Thermal Guidelines for Data Processing Environments, American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., 2008.
[20] American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment, Particulate and Gaseous Contamination in Datacom Environments, American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., 2009.
[21] American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment, Green Tips for Data Centers, American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., 2011.
[22] D. Alger, The Art of the Data Center: A Look Inside the World's Most Innovative and Compelling Computing Environments, Prentice Hall, Westford, MA, 2012.
[23] J. Cho, C. Jeong, B.S. Kim, Study on load-profiles and energy consumption for the optimal IT environment control in the (internet) data center, J. Archit. Inst. Korea 23 (2) (2007) 209–216.
[24] Integral Group, Energy Efficiency Baselines for Data Centers: Statewide Customized New Construction and Customized Retrofit Incentive Programs, Integral Group, Oakland, CA, 2013.
[25] N. Rasmussen, Guidelines for specification of data center power density, in: APC White Paper #120, 2011.


[26] J. Spitaels, Dynamic power variations in data centers and network rooms, in: Schneider-Electric White Paper 43, 2011.
[27] D.L. Beaty, Internal IT load profile variability, ASHRAE J. 55 (2) (2013) 72–74.
[28] S. Ham, M. Kim, B. Choi, J. Jeong, Energy saving potential of various air-side economizers in a modular data center, Appl. Energy 138 (2015) 258–275.
[29] American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., TC 9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment, Datacom Equipment Power Trends and Cooling Applications, American Society of Heating, Refrigerating and Air-Conditioning Engineers Inc., 2012.
[30] Emerson Network Power, Energy logic: reducing data center energy consumption by creating savings that cascade across systems, in: A White Paper from the Experts in Business-Critical Continuity, Emerson Network Power, 2008.
[31] S. Pelley, D. Meisner, T. Wenisch, J. VanGilder, Understanding and abstracting total data center power, in: Proceedings of the 2009 Workshop on Energy Efficient Design (WEED), 2009.
[32] H.J. Sauer, R.H. Howell, Principles of Heating, Ventilating and Air-Conditioning, American Society of Heating, Refrigerating and Air-Conditioning Engineers, 2010, ISBN-10: 1933742690.