AN-04-9-1

Evolution of Data Center Environmental Guidelines

Roger R. Schmidt, Ph.D.    Christian Belady    Alan Claassen    Tom Davidson    Magnus Herrlin    Shlomo Novotny    Rebecca Perry

ABSTRACT

Recent trends toward increased equipment power density in data centers can result in significant thermal stress, with the undesirable side effects of decreased equipment availability, wasted floor space, and inefficient cooling system operation. In response to these concerns, manufacturers identified the need to provide standardization across the industry, and in 1998 a Thermal Management Consortium was formed. This was followed in 2002 by the creation of a new ASHRAE Technical Group to help bridge the gap between equipment manufacturers and facility designers and operators. "Thermal Guidelines for Data Processing Environments," the first publication of TC9.9, is discussed in this paper, along with a historical perspective leading up to the publication and a discussion of issues that will define the roadmap for future ASHRAE activities in this field.

CURRENT INDUSTRY TRENDS/PROBLEMS/ISSUES
Over the years, computer performance has significantly increased but unfortunately with the undesirable side effect of higher power. Figure 1 shows the National/International Technology Roadmap for Semiconductors projection for processor chip power. Note that between the years 2000 and 2005 the total power of the chip is expected to increase 60% and the heat flux will more than double during this same period. This is only part of the total power dissipation, which increases geometrically. The new system designs, which include very efficient interconnects and high-performance data-bus design, create a significant increase in memory and other device utilization, thus dramatically exceeding power dissipation expectations. As a result, significantly more emphasis has been placed on the cooling designs and power delivery methods within electronic systems over the past year. In addition, the new trend of low-end and high-end system miniaturization, dense packing within racks, and the increase in power needed for power conversion on system boards have caused an order of magnitude rack power increase. Similarly, this miniaturization and increase in power of electronics scales into the data center environment. In fact, it was not until recently that the industry publicly recognized that the increasing density within the data center may have a profound impact on the reliability and performance of the equipment it houses in the future. For this reason, there has been a recent flurry of papers addressing the need for new room cooling technologies as well as modeling and testing techniques within the data center. All of these recognize that the status quo will no longer be adequate in the future.

Figure 1   Projection of processor power by the National/International Technology Roadmap for Semiconductors.

Figure 2   Equipment power projection (Uptime Institute).

Roger Schmidt and Alan Claassen are with IBM Corp., San Jose, Calif. Christian Belady is with Hewlett-Packard, Richardson, Tex. Tom Davidson is with DLB Associates, Ocean, N.J. Magnus Herrlin is a telecom consultant at ANCIS Professional Services, San Francisco, Calif. Shlomo Novotny and Rebecca Perry are with Sun Microsystems, San Diego, Calif. ©2004 ASHRAE.

So what are the resulting problems in the data center? Although there are many, the following list discusses some of the more relevant problems:
1.  Power density is projected to go up. Figure 2 shows how rapidly machine power density is expected to increase in the next decade. Based on this figure it can easily be projected that by the year 2010 server power densities will be on the order of 20,000 W/m2. This exceeds what today's room cooling infrastructure can handle (a rough illustration of this arithmetic is sketched after this list).

2.  Rapidly changing business demands. Rapidly changing business demands are forcing IT managers to deploy equipment quickly. Their goal is to roll equipment in and power on equipment immediately. This means that there will be zero time for site preparation, which implies predictable system requirements (i.e., "plug and play" servers).

3.  Infrastructure costs are rising. The cost of the data center infrastructure is rising rapidly, with current costs in excess of about $1000/ft2. For this reason, IT and facility managers want to obtain the most from their data center and maximize the utilization of their infrastructure. Unfortunately, there are many barriers to achieving this.

    First, airflow in the data center is often completely ad hoc. In the past, manufacturers of servers have not paid much attention to where the exhausts and inlets are in their equipment. This has resulted in situations where one server may exhaust hot air into the inlet of another server (sometimes in the same rack). In these cases, the data center needs to be overcooled to compensate for this inefficiency.

    In addition, a review of the industry shows that the environmental requirements of most servers from various manufacturers are all different, yet they all coexist in the same environment. As a result, the capacity of the data center needs to be designed for the worst-case server with the tightest requirements. Once again, the data center needs to be overcooled to maintain a problematic server within its operating range.

    Finally, data center managers want to install as many servers as possible into their facility to get as much production as possible per square foot. In order to do this they need to optimize their layout in a way that provides the maximum density for their infrastructure.

    The above cases illustrate situations that require overcapacity to compensate for inefficiencies.

4.  There is no NEBS equivalent specification for data centers. (NEBS, Network Equipment-Building Systems, is the telecommunication industry's most adhered-to set of physical, environmental, and electrical standards and requirements for a central office of a local exchange carrier.) IT/facility managers have no common specification that drives them to speak the same language and design to a common interface document.
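As a rough illustration of the power-density arithmetic behind item 1, the short sketch below converts a rack's power and floor area into W/m2. It is illustrative only: the 12 kW rack power, the 0.6 m x 1.0 m footprint, the 2 m2 allocated floor area, and the choice of footprint versus allocated-area convention are all hypothetical assumptions, not values taken from Figure 2.

    # Illustrative power-density arithmetic; all numbers are hypothetical.

    W_PER_FT2_PER_W_PER_M2 = 0.0929  # 1 W/m^2 = 0.0929 W/ft^2

    def power_density_w_per_m2(power_w: float, area_m2: float) -> float:
        """Equipment power divided by the floor area it is charged against."""
        return power_w / area_m2

    if __name__ == "__main__":
        # Hypothetical 12 kW rack on a 0.6 m x 1.0 m product footprint.
        footprint_density = power_density_w_per_m2(12_000, 0.6)
        # The same rack spread over roughly 2 m^2 of floor once its share of
        # aisle and service space is counted.
        room_density = power_density_w_per_m2(12_000, 2.0)
        print(f"{footprint_density:,.0f} W/m^2 on the product footprint "
              f"({footprint_density * W_PER_FT2_PER_W_PER_M2:,.0f} W/ft^2)")
        print(f"{room_density:,.0f} W/m^2 averaged over allocated floor area "
              f"({room_density * W_PER_FT2_PER_W_PER_M2:,.0f} W/ft^2)")
        # Either number is what a planner compares against the cooling capacity
        # the room's air-handling plant can deliver per unit of floor area.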
The purpose of this paper is to review what started as a "grassroots" industry-wide effort that tried to address the above problems and later evolved into an ASHRAE Technical Committee. This committee then developed "Thermal Guidelines for Data Processing Environments" (ASHRAE 2003a), which will be reviewed in this paper.

HISTORY OF INDUSTRY SPECIFICATIONS

Manufacturers' Environmental Specifications

In the late 1970s and early 1980s, data center site planning consisted mainly of determining if power was clean (not connected to the elevator or coffee pot), had an isolated ground, and if it would be uninterrupted should the facility experience a main power failure. The technology of the power to the equipment was considered the problem to be solved, not the power density. Other issues concerned the types of plugs, which varied widely for some of the larger computers. In some cases, cooling was considered a problem and, in some isolated cases, it was addressed in a totally different manner, so that the technology and architecture of the machine were dictated by the cooling methodology. Cray Research, for example, utilized a liquid-cooling methodology that forced a completely different paradigm for installation and servicing. Large cooling towers, which were located next to the main system, became the hallmark of computing prowess. However, typically the preferred cooling methods were simply bigger, noisier fans. The problem here was the noise and the hot-air recirculation when a system was placed too close to a wall.
Over the last ten years, the type of site planning information provided has varied depending on the company's main product line. For companies with smaller personal computers or workstations, environmental specifications were much like those of an appliance: not much more than what is on the back of a blender. For large systems, worksheets for performing calculations have been provided, as the different configurations had a tremendous variation in power and cooling requirements. In certain countries, power costs were a huge contributor to total cost of ownership. Therefore, granularity, the ability to provide only the amount of power required for a given configuration, became key for large systems.

In the late 1990s, power density became more of an issue and simple calculations could no longer ensure adequate cooling. Although cooling is a factor of power, power alone does not provide the important details of the airflow pattern and how the heat would be removed from the equipment. However, this information was vitally needed in equipment site planning guides. This led to the need for additional information, such as flow resistance, pressure drop, and velocity, to be available not only in the design stages of the equipment but after the release of the equipment for integration and support. This evolved into the addition of more complex calculations of the power specifications, plus the airflow rates and locations, and improved system layouts for increased cooling capacity.
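To make the preceding point concrete, the sketch below shows the simple energy balance that relates equipment power to the bulk airflow needed to remove it, the kind of calculation such site planning worksheets support. The 10 kW load and 12 K temperature rise are hypothetical assumptions used only for illustration.

    # Minimal sketch of the energy balance Q = P / (rho * cp * dT); the example
    # load and temperature rise are hypothetical, not from any manufacturer's guide.

    AIR_DENSITY = 1.2           # kg/m^3, air near sea level and room temperature
    AIR_SPECIFIC_HEAT = 1006.0  # J/(kg*K)

    def required_airflow_m3_per_s(heat_load_w: float, delta_t_k: float) -> float:
        """Volumetric airflow needed to remove heat_load_w with an air
        temperature rise of delta_t_k across the equipment."""
        return heat_load_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

    if __name__ == "__main__":
        # Hypothetical 10 kW rack with a 12 K inlet-to-exhaust temperature rise.
        flow = required_airflow_m3_per_s(heat_load_w=10_000, delta_t_k=12.0)
        print(f"{flow:.2f} m^3/s (~{flow * 2118.9:,.0f} CFM)")
        # Power fixes only this bulk flow; where the air enters and leaves the
        # rack, and at what flow resistance and pressure drop, still has to come
        # from the manufacturer's site planning data.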
In the early 2000s, power densities continued to increase as projected. Layout, based on basic assumptions, could not possibly create the efficiencies in the airflow that were required to combat the chips that were scheduled to be introduced in the 2004 time frame. Because of this, thermal modeling, which was typically used to design products, began to be viewed as a possible answer for optimizing cooling in a data center environment. However, without well-designed thermal models from the equipment manufacturers and easy-to-use thermal modeling tools for facility managers, the creation or troubleshooting of a data center environment fell again to traditional tools: basic calculations to build a data center, or gathering temperatures after a data center was in use.

At this point it became apparent that the problem of rising heat densities could not be solved after the delivery of a product. Nor could it be designed out of a product during development. Instead, it had to be part of the architecture of the entire industry's next-generation product offerings.

Formation of Thermal Management Consortium

In 1998 a number of equipment manufacturers decided to form a consortium to address common issues related to thermal management of data centers and telecommunications rooms. Initial interest was expressed by the following companies: Amdahl, Cisco Systems, Compaq, Cray, Inc., Dell Computer, EMC, HP, IBM, Intel, Lucent Technologies, Motorola, Nokia, Nortel Networks, Sun Microsystems, and Unisys. They formed the Thermal Management Consortium for Data Centers and Telecommunications Rooms. Since the industry was facing increasing power trends, it was decided that the first priority was to develop and then publish a trend chart on power density of the industry's equipment that would aid customers in planning data centers for the future. Figure 2 shows the chart that resulted from this effort. This chart has been widely referenced and was published in collaboration with the Uptime Institute in 2000. Following this publication the consortium formed three subgroups to address what customers felt was needed to align the industry:

A.  Rack airflow direction/rack chilled airflow requirements
B.  Reporting of accurate equipment heat loads
C.  Common environmental specifications

The three subgroups worked on the development of guidelines to address these issues until an ASHRAE committee was formed in 2002 that continued this effort. The result of these efforts is "Thermal Guidelines for Data Processing Environments" (ASHRAE 2003a), which is being published by ASHRAE. The objectives of the ASHRAE committee are to develop consensus documents that will provide environmental trends for the industry and guidance in planning for future data centers as they relate to environmental issues.

Formation of ASHRAE Group

The responsible committee for data center cooling within ASHRAE has historically been TC9.2, Industrial Air Conditioning.
The 2003 ASHRAE Handbook-HVAC Applications, Chapter 17, "Data Processing and Electronic Office Areas" (ASHRAE 2003b), has been the primary venue within ASHRAE for providing this information to the HVAC industry. There is also Standard 127-2001, Method of Testing for Rating Computer and Data Processing Room Unitary Air-Conditioners (ASHRAE 2001), which has application to data center environments. Since TC9.2 encompasses a very broad range of industrial air-conditioning environments, ASHRAE was approached in January 2002 with the concept of creating an independent committee to specifically address high-density electronic heat loads. The proposal was accepted by ASHRAE, and TG9.HDEC, High Density Electronic Equipment Facility Cooling, was created. TG9.HDEC's organizational meeting was held at the ASHRAE Annual Meeting in June 2002 (Hawaii). TG9.HDEC has since further evolved and is now TC9.9, Mission Critical Facilities, Technology Spaces, and Electronic Equipment.

The first priority of TC9.9 was to create a thermal guidelines document that would help to align the designs of equipment manufacturers and aid data center facility designers in creating efficient and fault-tolerant operation within the data center. The resulting document, "Thermal Guidelines for Data Processing Environments," was completed in a draft version on June 2, 2003. It was subsequently reviewed by several dozen companies representing computer manufacturers, facilities design consultants, and facility managers. Approval to submit the document to ASHRAE's Special Publications Section was made by TC9.9 on June 30, 2003, and publication is expected in December 2003.

TC9.9 ENVIRONMENTAL GUIDELINES

Environmental Specifications
For data centers, the primary thermal management focus is on the assurance that the housed equipment's temperature and humidity requirements are being met. As an example, one large computer manufacturer has a 42U rack with front-to-rear air cooling and requires that the inlet air temperature into the front of the rack be maintained between 10°C and 32°C for elevations up to 1287 m (4250 feet). Higher elevations require a derating of the maximum dry-bulb temperature of 1°C for every 218 m (720 feet) above 1287 m (4250 feet), up to 3028 m (10,000 feet). These temperature requirements are to be maintained over the entire front of the rack, across the 2 m height where air is drawn into the system. These requirements can be a challenge for customers of such equipment, especially when there may be many of these racks arranged in close proximity to each other and each dissipa-
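The inlet temperature and altitude derating requirement quoted above can be written as a short calculation. The sketch below encodes that one manufacturer's stated rule only; it is not a general ASHRAE limit, and the choice to apply the derating as a continuous linear reduction rather than in whole-degree steps is an assumption.

    # Sketch of the altitude derating rule quoted above for one manufacturer's
    # 42U front-to-rear cooled rack: 10-32 degC inlet up to 1287 m, with the
    # maximum dry-bulb temperature reduced by 1 degC per 218 m above 1287 m,
    # stated up to 3028 m. Continuous (non-stepped) derating is assumed here.

    BASE_MAX_C = 32.0      # maximum inlet dry-bulb temperature at or below 1287 m
    MIN_C = 10.0           # minimum inlet dry-bulb temperature
    BASE_ALT_M = 1287.0    # elevation below which no derating applies (4250 ft)
    DERATE_STEP_M = 218.0  # 1 degC of derating per 218 m (720 ft) above BASE_ALT_M
    MAX_ALT_M = 3028.0     # rule stated up to 3028 m (10,000 ft)

    def max_inlet_temp_c(elevation_m: float) -> float:
        """Maximum allowable inlet dry-bulb temperature at a given elevation."""
        if elevation_m > MAX_ALT_M:
            raise ValueError("rule is only stated for elevations up to 3028 m")
        derate = max(0.0, (elevation_m - BASE_ALT_M) / DERATE_STEP_M)
        return BASE_MAX_C - derate

    if __name__ == "__main__":
        for alt in (0, 1287, 2000, 3028):
            print(f"{alt:>5} m: {MIN_C:.0f} to {max_inlet_temp_c(alt):.1f} degC allowable inlet")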