Strategies for effective use of exergy-based modeling of data center thermal management systems

Microelectronics Journal 39 (2008) 1023–1029, www.elsevier.com/locate/mejo

Sara McAllister a,*, Van P. Carey a, Amip Shah a, Cullen Bash b, Chandrakant Patel b

a Department of Mechanical Engineering, University of California, Berkeley, 60A Hesse Hall, Berkeley, CA 94720, USA
b Hewlett-Packard Laboratories, Data Center Architecture Group, 1501 Page Mill Road, Palo Alto, CA 94304, USA

Received 2 April 2007; received in revised form 3 November 2007; accepted 13 November 2007. Available online 2 January 2008.

* Corresponding author. Tel.: +1 510 643 5282, +1 510 642 7177; fax: +1 510 642 1850. E-mail addresses: [email protected] (S. McAllister), [email protected] (V.P. Carey), [email protected] (A. Shah), [email protected] (C. Bash), [email protected] (C. Patel).

Abstract

As power densities in data centers quickly increase, the inefficiencies of yesterday are becoming costly data center thermal management problems today. One proposed method to address the inefficiencies of state-of-the-art data centers is to use the concept of exergy. To this end, earlier investigations have used a finite-volume, uniform-flow computer model to analyze exergy destruction as a means of identifying inefficiencies. For this type of exergy-based program to be a useful engineering tool, it should: (i) be easy to set up, viz. establish grid size and impose system parameters; (ii) have a formulation that is solvable and numerically stable; (iii) be executable in reasonable time on a workstation machine with typical processor speed and memory; and (iv) model the physics with acceptable accuracy. This investigation explored specific strategies for achieving these features. This work demonstrates that optimally chosen computational strategies do enhance the usefulness of an exergy-based analysis program as an engineering tool for evaluating the thermal performance of a data center.
© 2007 Elsevier Ltd. All rights reserved.

Keywords: Exergy; Availability; Data centers; Thermal management; Modeling strategy

1. Introduction

Average power densities in data centers are rapidly increasing and are expected to reach up to 3000 W/m² in the next 5 years [1]. At these heat dissipation levels, the inefficiencies of state-of-the-art data centers are intensifying. These inefficiencies include recirculation of warm air into the cold aisle, short-circuiting of the cold air back into the computer room air conditioning (CRAC) unit, lack of information about local conditions that could lead to hot spots, malprovisioning of cooling resources that leads to inefficient system operation, and inefficient workload placement [2,3]. A great deal of work has been done in recent years to address these issues. Traditionally, computational fluid dynamics (CFD) has been used to study the

doi:10.1016/j.mejo.2007.11.005

under-floor plenum, the airspace in the data center, and even the racks themselves. Parameters such as plenum height [4], perforated tile open area [5], and the spacing between CRAC units and vent tiles [6] have been optimized. The airspace in the data center has been modeled to compare different data center layouts [7,8]; to determine placement of racks and CRAC units to properly "provision" them [3]; to predict the results of a CRAC failure [9]; and to study the effects of parameters such as room height [10], rack removal [11], rack flow rate [12], and placement of high-powered racks [13]. CFD has even been used for the rack itself to determine a robust rack design [14]. Dimensionless parameters have been developed to aid in the evaluation of a data center's operating state [15,16]. Another proposed method to address the inefficiencies of state-of-the-art data centers is to use the concept of exergy [17–20]. Because irreversibilities destroy exergy, or available energy (see Appendix for further discussion), exergy-based analysis can be used to locate local inefficiencies and to optimize the entire data center by minimizing the total exergy destroyed. To this end, a


finite-volume, uniform-flow computer model has been previously developed [17,18]. In earlier work, a code of this type was validated by comparing its results to experimental data taken from a real data center, with generally good agreement [18]. However, previous studies focused primarily on the feasibility of this type of analysis and did not consider how best to use this type of computational tool. Specifically, the effect of cell size was discussed but no clear guidelines were given. In addition, it was mentioned that other operating parameters, such as the rack heat load, supply air temperature, and the supply airflow rate, could affect the accuracy of the code, but the extent was not explored. It is the goal of this work to examine these questions in detail and explore strategies to make this exergy-based program a useful engineering tool.

For an exergy analysis program to be a useful engineering tool, it should have the following qualities: (i) be easy to set up, viz. establish grid size and impose system parameters; (ii) have a formulation that is solvable and numerically stable; (iii) be executable in reasonable time on a workstation machine with typical processor speed and memory; and (iv) model the physics with acceptable accuracy. Clearly, (iii) and (iv) are not independent; they are inversely related, so a compromise between the two must be reached. This paper explores how to achieve these goals in the context of a specific data center design operating under various conditions.

2. Methods

The methods used to achieve the goals listed above are explored separately:

(i) Previous exergy analysis schemes [18] relied on the user to manually count the cells in the data center, determine which cells correspond to the various components, and enter these into the program. To achieve the first required attribute listed above, a scheme was created to simplify the establishment of the grid size and the system parameters, such as the layout of the data center, the supply air temperature, the load in each rack, etc. A separate input file was created where the user can enter the dimensions and locations of the components within the data center. The locations are specified using Cartesian coordinates with respect to a reference point and are entered into an array. A graphic display of the layout is provided so that the user can double-check the entered layout. The program based on this scheme then calculates the cell numbers for each component given the cell dimensions provided.
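As an illustration of how such an input scheme can map component locations onto grid cells, consider the minimal sketch below. This is not the authors' code; the Component class, the cells_for_component helper, and the uniform-grid assumption are ours.

```python
import math

class Component:
    """A data center component (rack, CRAC unit, or vent tile) described by
    its bounding box in Cartesian coordinates relative to a reference point."""
    def __init__(self, name, origin, size):
        self.name = name      # e.g. "rack_3"
        self.origin = origin  # (x, y, z) of the near corner, in meters
        self.size = size      # (dx, dy, dz) extents, in meters

def cells_for_component(comp, cell_dims):
    """Return the (i, j, k) indices of every grid cell the component
    overlaps, for a uniform grid with cell dimensions cell_dims (meters)."""
    lo, hi = [], []
    for origin, extent, cell in zip(comp.origin, comp.size, cell_dims):
        first = int(math.floor(origin / cell))
        # Cover at least one cell even for zero-thickness components.
        last = max(int(math.ceil((origin + extent) / cell)), first + 1)
        lo.append(first)
        hi.append(last)
    return [(i, j, k)
            for i in range(lo[0], hi[0])
            for j in range(lo[1], hi[1])
            for k in range(lo[2], hi[2])]

# A 0.6 m x 0.6 m vent tile whose corner sits 1.2 m from the reference
# point, mapped onto the 0.3 m x 0.3 m x 0.3 m grid of Table 1:
tile = Component("vent_tile_1", origin=(1.2, 0.0, 0.0), size=(0.6, 0.6, 0.0))
print(cells_for_component(tile, cell_dims=(0.3, 0.3, 0.3)))
# -> [(4, 0, 0), (4, 1, 0), (5, 0, 0), (5, 1, 0)]
```

Automating this lookup removes the manual cell counting of the earlier schemes while leaving the grid resolution a free parameter.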

Table 1
Parameter levels

High rack heat load:              3583 W/m² (333 W/ft²)
Low rack heat load:               717 W/m² (67 W/ft²)
High flow rate through each tile: 0.589 kg/s
Low flow rate through each tile:  0.267 kg/s
High supply air temperature:      297 K
Low supply air temperature:       289 K
Large cell size:                  0.6 m × 0.6 m × 0.6 m (240 cells total)
Medium-large cell size:           0.6 m × 0.6 m × 0.3 m (480 cells total)
Medium-small cell size:           0.3 m × 0.3 m × 0.6 m (960 cells total)
Small cell size:                  0.3 m × 0.3 m × 0.3 m (1920 cells total)

(ii) In order to calculate the exergy destroyed at each point in the data center, the temperatures at each point must be calculated first. Energy balances on each finite volume result in a large number of simultaneous equations that are solved by an iterative method. To enhance the stability and convergence of this method, we implemented more extensive tracking of local temperatures during the iterative scheme to verify the convergence of the solution for a variety of test cases (see Table 1 for the system parameters used).

(iii) Two strategies were employed to speed up the computations. The first was to choose the minimum number of iterations to perform based on the convergence information gained in the previous step. By doing this, some accuracy may be lost, but, as mentioned previously, a compromise between time and accuracy must be reached. The second strategy was to make slight modifications to the method used to solve the large number of simultaneous equations for the temperatures in each cell. Within the iterative scheme, determinant checks were performed several times within each loop as a criterion to stop the loop. Because the code was going to be stopped after the minimum number of iterations anyway, most of these determinant checks become unnecessary. Removing some of these checks dramatically reduces the computation time without affecting the results.

(iv) To evaluate the accuracy of the model, exergy destruction results were compared to results obtained using the Flovent CFD code [21]. During the course of this study, it became clear that the maximum attainable accuracy can be limited by the memory restrictions of a typical workstation machine.
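A compact sketch of the early-stopping strategy in (ii) and (iii): iterate the energy balances while tracking a handful of monitor points, and stop once those points stabilize past a minimum iteration count. The Jacobi-style driver, the tolerance, and the update_temperatures hook are illustrative assumptions, not the published solver.

```python
import numpy as np

def solve_temperatures(T, update_temperatures, monitor_points,
                       min_iters=40, max_iters=500, tol=1e-3):
    """Iteratively solve the cell energy balances, stopping once the
    monitored cells (the thermofluidically active areas) have converged.

    T                   -- 3-D array of initial cell temperatures (K)
    update_temperatures -- performs one sweep of the energy-balance update
                           (assumed to be supplied by the exergy code)
    monitor_points      -- (i, j, k) cells watched for convergence
    """
    prev = np.array([T[p] for p in monitor_points])
    for it in range(1, max_iters + 1):
        T = update_temperatures(T)
        cur = np.array([T[p] for p in monitor_points])
        # Enforce a minimum iteration count before testing convergence,
        # in the spirit of the recommendations of Table 3.
        if it >= min_iters and np.max(np.abs(cur - prev)) < tol:
            return T, it
        prev = cur
    return T, max_iters
```

Replacing whole-field convergence (and determinant) checks with a fixed monitor set is the kind of shortcut that produces the run-time savings reported in Section 5.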

3. System description

The layout of the sample data center used in this investigation is shown in Fig. 1. This raised-floor data center consists of one rack row with eight racks, eight 0.6 m × 0.6 m (2 ft × 2 ft) vent tiles, and two CRAC units. The ceiling height is 2.4 m (8 ft) measured from the floor, with an additional 0.3 m (1 ft) of under-floor plenum depth. For simplicity, only the area above the floor and below the ceiling is modeled.


Table 2
Experimental schedule

Trial no.   Rack load   Air flow rate   Supply air temperature
1           Low         Low             Low
2           Low         High            High
3           High        Low             High
4           High        High            Low

Each experimental trial is repeated for each cell size examined in Table 1.

Fig. 1. Data center layout as seen from above. Each tile represents an area of 0.6 m × 0.6 m. The ceiling is 2.4 m and there is an under-floor plenum that extends 0.3 m below the floor. The racks are 1.8 m tall.

Only the CRAC return mechanism was considered. The CRAC return area begins 1.8 m (6 ft) above the floor and extends to the ceiling. The vent tiles were considered undamped, with minimal flow restrictions. In this parametric computational study, a very simplistic treatment of the flow field was used: the flow into the data center was treated as evenly distributed, with each vent tile having the same flow rate. This assumption was validated by solving for the tile flow numerically using CFD. The rack height was taken to be 1.8 m (6 ft), and the heat generation is assumed uniform throughout each rack.

In this study, four different cell sizes were used (see Table 1). It should be noted that, due to restrictions within the code, for the smaller cell sizes the eight 0.6 m × 0.6 m vent tiles were replaced with sixteen 0.3 m × 0.3 m (1 ft × 1 ft) vent tiles such that the magnitude of the flow into the data center was the same. To track the convergence of the solution, 40 points were examined in the data center: the inlet and exhaust of each rack at heights of 1.2 m (4 ft) and 1.8 m (6 ft), and every 0.6 m (2 ft) along the CRAC unit return area at a height of 2.4 m (8 ft). The number and resolution of the points in the validation grid were chosen to represent sufficiently diverse flow and temperature conditions across key areas of thermofluidic activity in the data center.

In addition to cell size, previous work by Shah et al. [18] suggested several additional factors that could affect the accuracy of the code: airflow rate, supply air temperature, and rack heat dissipation. To include the possible effects of these parameters, the Taguchi method of experiment design [22] was used to devise an experiment schedule; a sketch of such a schedule appears below. Table 2 shows the L4 standard orthogonal array used at each cell size to vary these parameters, and Table 1 shows the values these parameters were given. The smallest cell size was chosen based on the restrictions imposed by the memory requirements of a typical workstation, while the largest cell size was chosen to represent a test case where memory would not be constrained but the accuracy of the physics may be suboptimal. Values for rack heat load, tile flow rate, and supply air temperature were chosen to represent typical extreme operating conditions in state-of-the-art data centers.
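As a hedged sketch of how the schedule of Tables 1 and 2 might be driven programmatically, the snippet below enumerates every L4 trial at every cell size; the run_trial hook that wraps the exergy code is our assumption.

```python
# L4 orthogonal array from Table 2 (0 = low level, 1 = high level) for the
# three factors: rack heat load, tile flow rate, supply air temperature.
L4_ARRAY = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

RACK_LOAD = (717.0, 3583.0)   # W/m^2 (Table 1)
TILE_FLOW = (0.267, 0.589)    # kg/s per tile
SUPPLY_T = (289.0, 297.0)     # K
CELL_SIZES = [(0.6, 0.6, 0.6), (0.6, 0.6, 0.3),
              (0.3, 0.3, 0.6), (0.3, 0.3, 0.3)]  # m

def run_schedule(run_trial):
    """Repeat every L4 trial at every cell size. run_trial is assumed to
    invoke the exergy code and return its destroyed-exergy estimate."""
    results = {}
    for cell_dims in CELL_SIZES:
        for trial, (q, f, t) in enumerate(L4_ARRAY, start=1):
            results[(cell_dims, trial)] = run_trial(
                rack_load=RACK_LOAD[q], tile_flow=TILE_FLOW[f],
                supply_temp=SUPPLY_T[t], cell_dims=cell_dims)
    return results
```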

4. CFD validation

The results of the exergy code were validated by comparison to results obtained using the commercial CFD software Flovent from Flomerics [21]. A grid-independent solution was constructed with 262,570 grid cells. Turbulent flow was modeled using the LVEL k-epsilon model. The boundaries of the computational domain were modeled as adiabatic with no penetration and no slip. The flow rates through the racks were chosen such that there was a 15 K temperature rise across the racks at the given heat load. CRACs were modeled as fixed-flow devices with defined volumetric flow rates and supply air temperatures according to Table 1. The flow within the under-floor plenum was solved numerically, but it was verified that the plenum pressure distribution led to a variation in vent tile flow of less than 5%, in accordance with the assumption of uniform flow. The temperature results were found by resolving the area in front of the racks into four regions; the same was done behind the racks. At each CRAC return, the average temperatures were found using volume regions.

Because there is no way to calculate the exergy destruction at the grid level using commercially available CFD, the exergy destroyed in the airspace is calculated indirectly by performing an exergy balance on the data center, as suggested by Shah et al. [18]. For a fixed control volume taken along the inside wall of a steady-state data center, the exergy balance becomes

$$0 = \Psi_{\mathrm{air,in}} - \Psi_{\mathrm{air,out}} - \Psi_{\mathrm{destroyed,airspace}} - \Psi_{\mathrm{destroyed,racks}}.$$

Plugging in the appropriate expressions (see Appendix) and solving for the exergy destroyed in the airspace gives

$$\Psi_{\mathrm{destroyed,airspace}} = \dot{m}\left[(h_{\mathrm{in}} - h_{\mathrm{out}}) - T_0(s_{\mathrm{in}} - s_{\mathrm{out}}) + \frac{v_{\mathrm{in}}^2 - v_{\mathrm{out}}^2}{2} + g(z_{\mathrm{in}} - z_{\mathrm{out}})\right] + \left(1 - \frac{T_0}{T_p}\right)\dot{Q},$$

where $T_p$ is the temperature at which heat production occurs and was chosen to be a constant 343 K.


Fig. 2. Temperature convergence for the case of high rack heat load, high tile flow rate, low supply temperature for 1920 cells. The different symbols represent different cell locations in the data center.

Ignoring pressure and velocity effects and simplifying the expressions for the enthalpy and entropy of an ideal gas,

$$\Psi_{\mathrm{destroyed,airspace}} = \dot{m}\left[c_p(T_{\mathrm{in}} - T_{\mathrm{out}}) - T_0\, c_p \ln\!\left(\frac{T_{\mathrm{in}}}{T_{\mathrm{out}}}\right)\right] + \left(1 - \frac{T_0}{T_p}\right)\dot{Q}.$$

This calculation is then based only on the supply air temperature, the average return air temperature from the CFD results, and the heat dissipated in the data center.
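The simplified balance reduces to a few lines of arithmetic; the sketch below evaluates it for air treated as an ideal gas. The function name and the constant cp value are our assumptions.

```python
import math

def airspace_exergy_destroyed(m_dot, T_in, T_out, Q_dot, T0, Tp=343.0,
                              cp=1005.0):
    """Exergy destroyed in the airspace (W) from the simplified balance.

    m_dot -- total supply air mass flow rate (kg/s)
    T_in  -- supply (inlet) air temperature (K)
    T_out -- average return air temperature (K)
    Q_dot -- total heat dissipated in the data center (W)
    T0    -- ground-state (reference) temperature (K)
    Tp    -- temperature of heat production, 343 K in this study
    cp    -- specific heat of air (J/(kg K)), an assumed constant
    """
    flow_term = m_dot * (cp * (T_in - T_out)
                         - T0 * cp * math.log(T_in / T_out))
    heat_term = (1.0 - T0 / Tp) * Q_dot
    return flow_term + heat_term
```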

5. Results

As mentioned, the requirement that the exergy-based program be solvable and numerically stable was verified by tracking the convergence of the temperatures during the iterative solution. During this analysis, it was found that the results at the points of interest converge before the iterative scheme for the entire data center reaches a robust solution. Fig. 2 shows a typical example of this convergence. From this figure, one sees that the results converge rather quickly: only about 40 iterations are needed. It should be noted that the results shown in this figure are the temperatures at each face of each cell; the final reported value at each location is the average of the values at the six faces of the cell.

To satisfy the requirement that the program be executable in reasonable time on a typical workstation, the results of this convergence analysis were used to make recommendations on the minimum number of iterations allowed. Table 3 shows these recommendations for each cell size tested for this example data center. Choosing fewer iterations can reduce the run time further, but such a run-time reduction will significantly affect the robustness and accuracy of the solution.

Table 3
Minimum no. of iterations for the example data center

No. of cells   Recommended no. of iterations
240            80
480            80
960            40
1920           40

Instead, the recommended numbers of iterations in Table 3 ensure, at a minimum, that the solutions obtained in the key data center areas (which have the most thermofluidic variability, and therefore the largest volatility in the numerical solution) will have stabilized regardless of the data center operating conditions considered. The remaining points in the grid may not have reached convergence at this iterative stage, but the change in temperature and flow during successive iterations of these outlying cells is found to have a negligible (<2%) impact on the overall system exergy loss. As a result, minimal gain is realized by incurring the additional computational expense of iterating until the full system has reached a robust solution, which leads to the recommended compromise of Table 3. This also effectively reduces the size of the system that must be solved, allowing a further reduction in the number of mathematical iterations required by the code. Combined, these changes decrease the run time by over 95%, so that run times for these simulations ranged from only around 5 s to just under 12 min on a 1.86 GHz Hewlett-Packard laptop with 512 MB of RAM. In comparison, the CFD runs took approximately 2 h each. It should be noted that the recommendations made here are specific to the data center layout and operating parameters considered; the number of iterations required to accurately model another data center may be different. If another data


Fig. 3. Total exergy destruction in the airspace for cell size tested compared to CFD results (hatched bars). The CFD results were found using an exergy balance on the entire data center using the calculated flow and temperature field. Rack heat generation (Qgen), vent tile airflow rate (mflow), and supply air temperature (ST) were varied.

center is to be modeled, the methodology presented here should be applied again to determine the appropriate number of iterations to run. The data center used here is meant as an example so that future users can replicate the method.

The last requirement, that this exergy-based program be reasonably accurate, was tested by comparing the results to the commercially available CFD program Flovent. As mentioned, there is currently no way to directly calculate the exergy destruction using commercially available CFD. Because the calculated values for the CFD results are based on a rudimentary analysis using limited data, it is difficult to discern the accuracy of the exergy model with great precision. It is important to note that the results from the exergy code are also approximations based on a very simple flow model, and the errors in the exergy calculations will be affected by any errors in the mass flow rate and temperature calculations. Therefore, in addition to a smaller grid size, it is possible that a more sophisticated flow model, such as a k–ε model, may provide a better estimate of the exergy destruction. Both options would require substantially more run time and are a subject of future research.

Despite the approximate nature of the analysis, the results appear to qualitatively reflect the expected trends (see Fig. 3). For example, high heat generation levels (Qgen) and high flow rates (mflow) result in higher exergy destruction in the airspace. Because no useful work is done with the heat generated in the data center, higher heat generation results in more exergy destroyed. In addition, the rack inlet–outlet temperature difference will be larger at higher heat loads, so any recirculation in the cold

aisle should have a higher exergy loss for high Qgen values. Similarly, higher mass flow rates in the data center result in higher exergy destruction because there is a higher volume flow rate of air that is potentially mixing. The results of Fig. 3 clearly reflect these expected trends. Another key finding from Fig. 3 is that the results of the exergy code are comparable to those of the CFD model, but were achieved in a small fraction of the time. Because of this, it is now feasible to add this type of analysis to a real-time control system to optimize a data center's performance. When the heat generation level and flow rates are low, changing the cell dimensions has little effect on the exergy destruction results, indicating that the exergy code can sufficiently model the physics at even the coarsest of granularities. However, in the high flow rate cases, it is clear that the largest cell size does not accurately model the thermofluidics: the complexity of the mass flow and temperature fields requires a finer granularity. This is perhaps why shrinking the cell dimensions has a greater effect when the exergy destruction is high than when it is low.

6. Conclusions

Power densities in data centers are quickly rising and, in the process, the inefficiencies of state-of-the-art data centers are only getting worse. One proposed method to address this issue is to use the concept of exergy. To this end, a finite-volume, uniform-flow computer model has been previously developed. This paper explores how best to use this new computational tool by identifying four key attributes and providing strategies to attain them. The four


key attributes identified were that the program be: (i) easy to set up, (ii) solvable and numerically stable, (iii) executable in a short time, and (iv) accurate. To achieve (i), a scheme was developed that allows the user to input the locations and dimensions of the components instead of manually counting and entering the cell numbers for each component. The convergence of the solution for varying cell sizes and operating conditions, such as the rack heat load, supply air temperature, and airflow rate, was assessed more precisely by more extensive tracking of local temperatures during the iterative solution. It was found that the temperatures in the key areas of the data center converged in about 40–80 iterations, well before the iterative scheme self-terminated. The results converged more quickly for the larger cell sizes and were not affected by the operating conditions. Because the temperature results converged before the iterative scheme self-terminated, (iii) was partially addressed by terminating the loop early, after a minimum number of iterations. Together with the removal of some unnecessary calculations, streamlining the temperature calculations resulted in a 95% reduction in computation time with only a 2% change in the final exergy field results. The accuracy of the code was verified by comparison to results from a CFD analysis. Because the exergy results from the Flovent CFD model are based on a "global" analysis instead of a "local" analysis, it is impossible to make valid quantitative comparisons, but several clear trends were seen in the overall exergy destruction in the data center. When there are large gradients within the data center, such as those that occur with large rack heat generation or flow rates, the largest cell size used in this study was inadequate, and the decrease in run time does not outweigh the loss in accuracy. However, when the gradients are small, such as when the tile flow rate and rack heat generation are low, the cell size is less important and the largest cell size can be used. On the whole, the data center operating conditions dictate the largest justifiable cell size and consequently the fastest run times.


Acknowledgments


This research was supported by a Microelectronics Innovation and Computer Research Opportunities (MICRO) grant from the University of California (grant no. 04-014). Additional funding was provided by a generous gift from the Hewlett-Packard Company through the Center for Information Technology Research in the Interest of Society (CITRIS) at the University of California.

Appendix

Exergy is defined as "the useful work potential of a given amount of energy at some specified state," and is also called "availability" or "available energy" [23]. It is defined relative to a reference state so that the exergy at the reference state is zero. The reference, or ground, state is typically the same as the environment, but can be arbitrarily chosen.

The concept of exergy is closely related to the second law of thermodynamics. The Clausius statement of the second law implies that there is a theoretical limit to the work output of an engine operating between a heat source and a heat sink. The maximum efficiency that a heat engine can achieve is given by the Carnot efficiency:

$$\eta_{\mathrm{th}} = 1 - \frac{T_{\mathrm{sink}}}{T_{\mathrm{source}}}.$$

For example, the maximum amount of work that can be done by 100 J of energy at 500 K with respect to a sink at 273 K is

$$\dot{W}_{\max} = \eta_{\mathrm{th}}\,\dot{E} = \left(1 - \frac{273}{500}\right) \times 100\ \mathrm{J} = 45.4\ \mathrm{J}.$$

Using the definition provided above, this quantity represents the exergy of the system. In comparison, the maximum amount of work that can be done by 100 J of energy at 350 K relative to the same sink is

$$\dot{W}_{\max} = \eta_{\mathrm{th}}\,\dot{E} = \left(1 - \frac{273}{350}\right) \times 100\ \mathrm{J} = 22\ \mathrm{J}.$$

The exergy, or useful work potential, of the first system is more than twice that of the second. It is clear why exergy can also be viewed as a quantitative measure of the "quality" of the energy.

There are many ways in which exergy can be destroyed or, from another perspective, many ways to reduce the "quality" of the energy. Any irreversible process, that is, anything that generates entropy, destroys exergy. Examples include friction, heat transfer through a finite temperature difference, and mixing. These can also be seen as processes in which work potential is lost.

To perform an exergy analysis on an open system, such as a data center, one needs to include the flow exergy of the streams and the transfer of exergy due to heat, work, and mass. A discussion of these quantities can be found in Ref. [23]. The specific flow exergy (ψ) in an open system is given by

$$\psi = (h - h_0) - T_0(s - s_0) + \tfrac{1}{2}v^2 + gz,$$

where h is enthalpy, s is entropy, v is velocity, z is elevation, and the subscript 0 indicates the ground state.

The two forms of exergy transfer that are important in a data center are the transfer by mass and the transfer by heat, given, respectively, by

$$\dot{\Psi} = \dot{m}\,\psi, \qquad \dot{\Psi}_{\mathrm{heat}} = \left(1 - \frac{T_0}{T}\right)\dot{Q}.$$

References

[1] A. Beitelmal, C. Patel, Thermo-fluids provisioning of a high performance high density data center, Distrib. Parallel Databases 21 (2/3) (2007) 227–238.

[2] C.E. Bash, C.D. Patel, R.K. Sharma, Efficient thermal management of data centers–immediate and long-term research needs, Int. J. HVAC&R Res. 9 (2) (2003) 137–152.
[3] C. Patel, R. Sharma, C. Bash, A. Beitelmal, Thermal considerations in cooling large scale high compute density data centers, in: Proceedings of ITHERM 2002, the Eighth Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, San Diego, CA, 2002, pp. 767–776.
[4] K. Karki, S. Patankar, A. Radmehr, Techniques for controlling airflow distribution in raised-floor data centers, Adv. Electron. Packag. 2 (2003) 621–628.
[5] R. Schmidt, E. Cruz, Cluster of high powered racks within a raised floor computer data center: effect of perforated tile flow distribution on rack inlet air temperatures, J. Electron. Packag. 126 (4) (2004) 510–518.
[6] J. Rambo, Y. Joshi, Supply air distribution from a single air handling unit in a raised floor plenum data center, in: Proceedings of the ISHMT/ASME Joint Heat and Mass Transfer Conference, Kalpakkam, India, 2004.
[7] S. Shrivastava, B. Sammakia, R. Schmidt, M. Iyengar, Comparative analysis of different data center airflow management configurations, in: Proceedings of the International Electronic Packaging Technical Conference and Exhibition, San Francisco, CA, 2005.
[8] K. Karki, S. Patankar, Airflow distribution through perforated tiles in raised-floor data centers, Build. Environ. 41 (2006) 734–744.
[9] M. Beitelmal, C. Patel, Thermo-fluids provisioning of a high performance high density data center, Distrib. Parallel Databases 21 (2007) 227–238.
[10] R. Schmidt, Effect of data center characteristics on data processing equipment inlet temperatures, Adv. Electron. Packag.: Therm. Manag. Reliab. 2 (2001) 1097–1106.
[11] R. Schmidt, E. Cruz, Raised floor computer data center: effect on rack inlet temperatures when adjacent racks are removed, Adv. Electron. Packag. 2 (2003) 481–493.
[12] R. Schmidt, E. Cruz, Raised floor computer data center: effect on rack inlet temperatures when rack flowrates are reduced, Adv. Electron. Packag. 2 (2003) 495–508.
[13] R. Schmidt, E. Cruz, Raised floor computer data center: effect on rack inlet temperatures when high powered racks are situated amongst lower powered racks, Am. Soc. Mech. Eng., EEP, Electron. Photon. Packag., Electr. Syst. Photon. Des. Nanotechnol. 2 (2002) 297–309.


[14] N. Rolander, J. Rambo, Y. Joshi, F. Mistree, Robust design of air-cooled server cabinets for thermal efficiency, in: Proceedings of the International Electronic Packaging Technical Conference and Exhibition, San Francisco, CA, 2005.
[15] R. Sharma, C. Bash, C. Patel, Dimensionless parameters for evaluation of thermal design and performance of large-scale data centers, in: Proceedings of the Eighth AIAA/ASME Joint Thermophysics and Heat Transfer Conference, St. Louis, June 2002.
[16] R. Sharma, C. Bash, C. Patel, M. Beitelmal, Experimental investigation of design and performance of data centers, in: Proceedings of the Ninth Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems (ITHERM 2004), vol. 1, 2004, pp. 579–585.
[17] A. Shah, V. Carey, C. Bash, C. Patel, Exergy analysis of data center thermal management systems, in: Proceedings of the ASME Advanced Energy Systems Division, AES vol. 43, 2003, pp. 437–446.
[18] A. Shah, V. Carey, C. Bash, C. Patel, Development and experimental validation of an exergy-based computational tool for data center thermal management, in: Proceedings of the ASME Heat Transfer/Fluids Engineering Summer Conference, HT/FED 2004, vol. 1, pp. 965–971.
[19] A. Shah, V. Carey, C. Bash, C. Patel, Exergy analysis of data center thermal management systems, J. Heat Transfer, 2008, in press.
[20] A. Shah, V. Carey, C. Bash, C. Patel, An exergy-based figure of merit for electronic packages, J. Electron. Packag. 128 (4) (2006) 360–369.
[21] Flomerics Ltd., Flovent, version 2.1, 81 Bridge Road, Hampton Court, Surrey KT8 9HH, UK, 1999.
[22] P. Ross, Taguchi Techniques for Quality Engineering: Loss Function, Orthogonal Experiments, Parameter and Tolerance Design, second ed., McGraw-Hill, New York, 1996.
[23] Y. Cengel, M. Boles, Thermodynamics: An Engineering Approach, fourth ed., McGraw-Hill, Boston, 2002.