Hurdles in Deploying Liquid Cooling in NEBS Environment

Herman Chu
Herman Chu is principal engineer at Cisco Systems, Inc., San Jose, CA.

ABSTRACT

With computer servers' exponential growth in rack power density, from sub-10 kW (34,121 Btu/hr) of yesteryears, to 30 kW (102,363 Btu/hr) in the last half decade, to current product launches of over 60 kW (204,726 Btu/hr), there is significant desire and product research by
datacenter cooling equipment vendors, as well as computer server equipment vendors, to introduce liquid-cooling solutions in various forms, such as direct cooling at the equipment level or air-to-liquid heat exchanging at the rack. In this paper, we would like to differentiate the equipment for the Telecom Central Office (CO) environment from the more industry-dominant Datacenter (DC) environment. A holistic examination, from network equipment design to the Telecom CO requirements, follows in explaining the different hurdles along the way to implementing liquid cooling in the Telecom environment.
INTRODUCTION

Unless otherwise specified or discussed, the references here to air cooling and liquid cooling refer to such implementations at the equipment design level and not at the rack or room level. It is well established that liquid cooling, which includes most of the non-air cooling technologies, such as water, two-phase/multi-phase flow, and refrigerant systems, is a much more effective method of extracting heat because, by comparison, air is a poor conductor of heat and has low heat capacity. These alternative cooling techniques are not new in electronic cooling, but are reinvestigated every decade or so when existing electronics technology reaches a power density plateau that cannot be adequately addressed by air cooling. The most recognized of these past liquid-cooling designs is the IBM Thermal Conduction Module (TCM) (Kraus et al. 1983) of the 1980s to the early 1990s for cooling bipolar devices. It was abandoned when CMOS technology came along and provided continued scaling of higher performance with lower power consumption. However, server vendors have since brought liquid cooling back to the latest generation of DC servers because the rack power density is surpassing 30 kW (102,363 Btu/hr).
Air cooling has always been the cooling method of choice because of beneficial attributes that go well beyond a pure comparison of cooling performance. Some of these attributes include low cost, ease of implementation (both in the design and in equipment deployment), dielectric nature, and no adverse environmental impact. Until there is a drastic paradigm shift in the ultimate heat-sinking fluid, such as dumping the waste heat directly into the ocean (or a lake), from a holistic view air is still the ultimate sink to which the heat is dissipated, and heat will continue to be dumped into the environment.
The industry is once again nearing the power plateau. Strong debates exist between the two poles of air cooling and liquid cooling of IT equipment, and the continuum of solutions in between, because the power density is at the transition boundary. If the power density continues its exponential growth, the cooling sweet spot may shift; it will no longer be a debate, but will push the industry into liquid cooling because other issues become less manageable, such as acoustic noise, airflow distribution, and maintaining proper local component temperatures.
The market for computer and computer server units is larger than the network equipment market. Most of the commercial cooling equipment vendors are much more familiar and involved with the DC environment than with the CO environment. The main purpose of this paper is to point out the differences, in the hope of helping vendors derive solutions tailored for CO operators instead of a one-size-fits-all approach. Before the industry can embrace liquid cooling, there are issues that need to be understood and addressed.
MARKET SEGMENTS OVERVIEW FOR NETWORK EQUIPMENT

Generally, network equipment can be categorized into the following market segments:

Consumer. Sometimes these are also referred to as customer premises equipment (CPE) for use with service provider services. They include equipment such as telephones, DSL modems, cable modems, set-top boxes, and private branch exchanges.

Small office, home office (SOHO), branch, medium office. For SOHO, it is usually from 1 to 10 employees. For branch and medium office, the equipment is typically in a designated area or room in the office, but not in a datacenter.

Enterprise. These are the large corporations that typically have DCs with well-controlled environments housing the IT and network equipment.
Service providers (SP). The Telecom companies, such as AT&T [...], both having a profound impact on how equipment is designed.

Availability, Serviceability and Redundancy Considerations

Similar to past mainframe computer requirements, high availability is a crucial attribute of routers and switches deployed in COs. For instance, in a catastrophic or emergency scenario, users expect the dial tone to be instant when the phone is picked up. With the high-availability requirement, serviceability and redundancy are key elements in the equipment design to minimize any intended or unintended downtime. Network equipment has to be easily serviceable, and redundancy is generally designed into the cooling and power systems and at the board level for each type of board. For larger equipment, such as a core router, it is not possible to provide redundancy by duplicating the machine.
To utilize liquid cooling, these design constraints need to be carefully addressed. As listed in Table 1, per NEBS requirements, it can take up to 96 hours before any operator intervention. This includes any repair or replacement of the cooling system, power supplies, or boards. This means that any fluid leakage, pump failure, or failure of any other component of the liquid-cooling loop needs to be self-healed or self-controlled within the 96-hour window. Most likely this will require redundant components (such as pumps), bypass circuitry, pneumatic on/off valves, and leakage detection.
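As a concrete illustration of what such self-contained protection logic might look like, the sketch below shows a minimal, hypothetical coolant-loop supervisor in Python. The flow threshold, sensor calls, and valve interfaces are illustrative assumptions only and do not represent any particular product or API.

    # Hypothetical sketch: supervising a redundant liquid-cooling loop so that
    # pump failures or leaks are handled automatically, in the spirit of the
    # 96-hour no-operator-intervention window discussed above.
    import time

    MIN_FLOW_LPM = 4.0  # assumed minimum acceptable coolant flow, L/min

    class Pump:
        def __init__(self, name):
            self.name = name
            self.running = False
        def start(self):
            # Placeholder for a real pump-control interface.
            self.running = True
        def stop(self):
            self.running = False
        def flow_lpm(self):
            # Placeholder flow-meter reading.
            return 5.0 if self.running else 0.0

    class Valve:
        def __init__(self, name, is_open=True):
            self.name = name
            self.is_open = is_open
        def close(self):
            self.is_open = False
        def open(self):
            self.is_open = True

    def leak_detected():
        # Placeholder for rope/spot leak sensors in the drip tray.
        return False

    def raise_alarm(msg):
        # In a real product this would feed the element-management/alarm system.
        print("ALARM:", msg)

    def supervise(primary, standby, isolation_valve, bypass_valve, period_s=1.0):
        """Fail over to the standby pump on low flow; isolate the loop on a leak."""
        primary.start()
        while True:
            if leak_detected():
                # A leak cannot wait 96 hours for a technician: stop the pumps,
                # isolate the loop, and open the bypass so cooling degrades gracefully.
                primary.stop()
                standby.stop()
                isolation_valve.close()
                bypass_valve.open()
                raise_alarm("coolant leak detected: loop isolated")
                return
            if primary.running and primary.flow_lpm() < MIN_FLOW_LPM:
                # Primary pump degraded or failed: engage the redundant pump.
                primary.stop()
                standby.start()
                raise_alarm("primary pump failure: standby pump engaged")
            time.sleep(period_s)

The point of the sketch is only that every failure response is automatic and reported as an alarm, so the equipment can ride out the unattended window without a site visit.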
Reliability

To illustrate the importance of reliability, let's review Figure 3 again. The slope for the Telecom equipment curve is shallower than the one for the computer equipment. Both industries ride the same semiconductor technology and advances, so why are they different? One reason could be that the Telecom equipment product cycle is generally longer than the computer and server product cycle. Once the equipment is deployed in the CO, it can be there for more than 10 years, while for the computer and server industry the product life is much shorter. Due to the difference in product life cycle, the network equipment vendors have not been able to project the power-increase trend of the semiconductors as well as the computer and server vendors have.
Table 1. GR-63-CORE Environmental Limits

Conditions                                  Limits
Temperature
  Operating                                 5°C to 40°C (41°F to 104°F)
  Short-term (Note 1)                       -5°C to 50°C (23°F to 122°F)
  Short-term with fan failure               -5°C to 40°C (23°F to 104°F)
Rate of temperature change                  30°C/hr (86°F/hr)
Relative humidity
  Operating                                 5% to 85%
  Short-term (Note 1)                       5% to 90%, but not to exceed 0.024 kg water/kg of dry air (0.024 oz water/oz of dry air)
Altitude
  Operating up to 40°C (104°F)              -60 m to 1,800 m (-197 ft to 5,905 ft)
  Operating up to 30°C (86°F)               1,800 m to 4,000 m (5,905 ft to 13,123 ft)

Notes:
1. Short-term refers to a period of not more than 96 consecutive hours and a total of not more than 15 days in 1 year. (This refers to a total of 360 hours in any given year, but no more than 15 occurrences during that 1-year period.)
2. Frame-level products are tested to 50°C (122°F). Shelf-level products are tested to 55°C (131°F).
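For designers who want to sanity-check a proposed operating point against these limits, the short sketch below encodes the steady-state operating rows of Table 1 in Python. It is only an illustrative reading of the table above (operating limits, lower altitude band); consult GR-63-CORE itself for the normative conditions.

    # Illustrative encoding of the GR-63-CORE operating limits from Table 1.
    # A sketch of how the published limits might be checked in a monitoring
    # tool; it is not a substitute for the standard itself.
    OPERATING_LIMITS = {
        "temp_c": (5.0, 40.0),          # operating temperature, deg C
        "rh_pct": (5.0, 85.0),          # operating relative humidity, %
        "altitude_m": (-60.0, 1800.0),  # operating altitude band for up to 40 deg C
    }

    def within_operating_limits(temp_c, rh_pct, altitude_m):
        """Return a list of violated limits (an empty list means compliant)."""
        readings = {"temp_c": temp_c, "rh_pct": rh_pct, "altitude_m": altitude_m}
        violations = []
        for key, (low, high) in OPERATING_LIMITS.items():
            if not (low <= readings[key] <= high):
                violations.append(f"{key}={readings[key]} outside {low}..{high}")
        return violations

    # Example: a 45 deg C aisle at 60% RH and 300 m elevation violates only temperature.
    print(within_operating_limits(45.0, 60.0, 300.0))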
In the past, it was not unusual for mainframe computers to require regular maintenance, such as checking the liquid-cooling system for proper operation, flushing the system, and topping off the fluid. This is not the case for CO equipment, as there is no regular service maintenance. Therefore, all the components need to be highly reliable.

TYPICAL CO ENVIRONMENT

The intent of this section is to illustrate examples of buildings that house COs and common airflow-management standards, to point out the difficulty in bringing liquid cooling to the racks and equipment for Telecom customers.
Most of the COs in the US were constructed before the 1960s. Figure 6 shows the extremes of these buildings: some are very simple structures and others are multi-floor high-rises, and the environmental control scheme can vary drastically. The more modern ones are more amenable to deploying liquid-cooled racks or equipment. There is a trend toward consolidating DCs into a few super DCs around the world, and these super DCs have the most modern and efficient power and cooling management systems. COs, in contrast, are generally closer to the communities that they serve, and there are more of them. Therefore, CO operators prefer equipment that can fit into existing COs with minimal upgrades or changes to the infrastructure, upgrading only the COs where it is absolutely necessary.

Figure 6   Examples of buildings housing COs.
Two good references for interior cooling layouts and classifications of COs are NEBS GR-3028-CORE, "Thermal Management in Telecommunications Central Offices" (Telcordia 2001), and ETSI TR 102 489, "Environmental Engineering (EE); European telecommunications standard for equipment practice; Thermal Management Guidance for equipment and its deployment" (ETSI 2008-09).

GR-3028-CORE defines room-cooling (RC) classifications as follows:

Legacy RC-Classes
  RC-Class VOH (Vertical Overhead Air-Distribution)
  RC-Class HOH (Horizontal Overhead Air-Distribution)
Non-Legacy RC-Classes
  RC-Class HDP (Horizontal Displacement Air-Distribution)
  RC-Class VUF (Vertical Under-Floor Air-Distribution)
  RC-Class NOH (Natural Convection Overhead Air-Distribution)
Supplemental-Cooling (SC) Classes

Figure 7 and Figure 8 are examples of a couple of these classes. RC-Class VOH (Figure 7) is the typical and preferred configuration for the large regional phone companies in the US.
Figure 7   GR-3028-CORE RC-VOH Class.
Figure 8   GR-3028-CORE RC-HOH Class.

In ETSI TR 102 489, the following room-cooling techniques are detailed, and Figure 9 provides some examples from this specification:

Cooling systems for a room
  Passive cooling
  Warm air extraction (without cool air)
  Fresh air supply with natural release via pressure relief ventilators
  Cool air blowing (with or without relative humidity control)

Figure 9   ETSI TR 102 489 free blow and overhead distribution examples.
OTHER CUSTOMER CONSIDERATIONS

Other relevant factors for customers considering alternative cooling include:

Capital expenditure (CapEx) investment
Operating expenditure (OpEx)
Environmental impact
Ease of equipment installation

Based upon one study published by Hannemann and Chu (Hannemann 2007), the CapEx investment could be almost 30% more (Figure 10) for datacenter and CO operators adopting alternative cooling approaches. Besides the added investment cost, it is a major undertaking to upgrade the facility to bring chilled liquid to the rack, increase overall facility cooling capacity, and compete for space with other components, such as data cables and power infrastructure. As mentioned previously, network equipment life can be 10 years or more. This also plays into the decision on the cooling approach for the CO, because the decision may require substantial up-front investment in the chosen cooling approach for the next 10 years or more. For the details of the configurations studied, please refer to the reference (Hannemann 2007).

For OpEx, alternative cooling can be an improvement in monthly energy usage, since liquid cooling is more efficient in extracting heat.
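To make the CapEx/OpEx trade-off concrete, a simple payback estimate can be sketched as below. Only the roughly 30% CapEx premium comes from the Hannemann study cited above; the baseline cost and the monthly energy savings are purely illustrative assumptions, not figures from this paper.

    # Hypothetical simple-payback estimate for an alternative-cooling build-out.
    # Only the ~30% CapEx premium is from the cited study; every other number
    # below is an illustrative assumption.
    baseline_capex = 1_000_000.0    # assumed cost of the air-cooled baseline, USD
    capex_premium = 0.30            # ~30% more CapEx for alternative cooling
    extra_capex = baseline_capex * capex_premium

    monthly_energy_cost = 20_000.0  # assumed monthly cooling energy bill, USD
    opex_savings_fraction = 0.20    # assumed efficiency gain from liquid cooling
    monthly_savings = monthly_energy_cost * opex_savings_fraction

    payback_months = extra_capex / monthly_savings
    print(f"Extra CapEx: ${extra_capex:,.0f}, payback: {payback_months:.0f} months "
          f"({payback_months / 12:.1f} years)")

With these assumed numbers the payback is on the order of six years, which is why the 10-year-plus equipment life mentioned above matters so much to the decision.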
All the equipment has to be assessed for environmental friendliness. The cooling equipment and materials used need to be evaluated from cradle to grave to ensure that they are benign to the atmosphere and amenable to recycling.

Air-cooling equipment is much easier to install than liquid-cooling equipment. Storage and transportation of liquid-cooling equipment also require more up-front attention. How will the equipment behave at the more extreme environmental conditions for storage and transportation? Will the equipment be shipped without the fluid and charged at the installation site, or shipped with the fluid? If shipped with the fluid, it is necessary to ensure that the additional weight stays within reasonable limits for transit.
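As a rough, hypothetical illustration of the weight concern (the coolant volume and density below are assumptions, not values from this paper), the added shipping mass of a factory-filled loop is simply the fluid volume times its density:

    m_{\mathrm{fluid}} = \rho\, V \approx 1.05\ \mathrm{kg/L} \times 40\ \mathrm{L} \approx 42\ \mathrm{kg}\ (\approx 93\ \mathrm{lb})

That increment has to be counted against packaging, handling, and freight limits before deciding whether to ship charged or dry.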
SUMMARY

From a cooling-performance perspective, liquid cooling is without dispute better than air cooling. However, this paper has provided some level of detail on other important and practical trade-offs that need careful planning and design to address the concerns and constraints of the Telecom industry and the CO environment. The one at the top of the list is designing a product with a self-contained, redundant cooling system that meets the stringent NEBS environmental requirements while maintaining a reasonable product form factor.
It is also apparent from this discussion that there will be no debate between air cooling and liquid cooling if no technology emerges that allows performance to keep scaling without continued significant increases in power consumption. When power consumption reaches that limit, air cooling alone will have more difficulty providing proper airflow distribution, keeping air-mover power consumption reasonable (power consumption is proportional to the cube of air-mover speed), and attaining acoustic levels proper for operators to work in. These limitations apply to both room cooling and equipment cooling.
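The cube relationship quoted above is the familiar fan affinity law; as a quick worked example (the 20% speed increase is just an illustrative number):

    \frac{P_2}{P_1} = \left(\frac{N_2}{N_1}\right)^{3}, \qquad (1.2)^{3} \approx 1.73

so a 20% increase in air-mover speed costs roughly 73% more fan power, which is why pushing air cooling harder quickly runs into both energy and acoustic limits.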
It is the hope of this presentation to guide potential vendors in addressing NEBS-specific design requirements and the NEBS environment.

REFERENCES

Kraus, A.D., and A. Bar-Cohen. 1983. Thermal Analysis and Control of Electronic Equipment. Chapter 23.11, Conduction cooling for an LSI package. St. Louis Park, MN: ABC/ADK Books.

Development Data Group. 1994-2007. World Development Indicators. ISBN 978-0-8213-7386-6. Washington, D.C.: World Bank.

U.S. EPA, Energy Star Program. 2007. Report to Congress on Server and Data Center Energy Efficiency, Public Law 109-431.

Botelho, B. 2007. Server power consumption rises unabated. 0,289142,sid80_gci1286282,00.html.

Telcordia. 2006. GR-63-CORE, Network equipment, building system (NEBS) requirement