OR-16-C037
Data Center Great Debate: Competing Ideas for Maximizing Design Efficiencies

Dan Comperchio, PE          Sameer Behere, PE
Member ASHRAE               Member ASHRAE

Dan Comperchio is a Senior Engineer at Willdan Energy Solutions in Chicago, IL. Sameer Behere is an Energy Engineering Manager at Syserco in Fremont, California.

ABSTRACT

Data centers have an extensive range of complicated system design choices that can often seem overwhelming when deciding on the best way to optimize system design for reliability and energy efficiency. Is an air-side or water-side economizer system better, or should an indirect system be used over a wet-bulb economizer design? Is the industry moving away from raised floor designs to installing server cabinets directly on slab? High-level decisions can be complicated, and diving further into the details can reveal even more trade-offs and choices. Should containment be done on the cold aisle or the hot aisle? Is it better to select units with electronically commutated (EC) fans or VFD-equipped motors? The authors present a range of topics for debate in data center design, discuss the strengths and weaknesses of each, and review their applicability and limiting factors. These highly contested topics are being debated among designers and operators, end users and owners. Readers are encouraged to participate in these discussions to contribute to the variety of viewpoints on complex data center systems design.

INTRODUCTION

Data center owners, operators and designers have a wide range of options in system configurations and equipment selections when designing a data center. The industry is moving towards higher temperatures within the critical environments; however, there is a large variation in the implementation of recommended levels and upper limits on operating conditions. Additionally, mechanical systems are evolving to handle increased densities within the data center as well as to minimize energy use during heat rejection. In this paper, the authors present a number of competing ideas and strategies found commonly within the industry, examine their benefits and drawbacks, and provide observations drawn from operational experience. Additional factors are also presented for consideration when balancing system costs, performance, and energy efficiency.

SYSTEM CONSIDERATIONS

Economization: Air-Side versus Water-Side

One of the largest factors in a data center's ability to operate efficiently is the use of economization strategies that allow for the aggressive use of ambient conditions for cooling the white space. The current edition of ASHRAE Thermal Guidelines for Data Processing Environments allows new, modern server and IT equipment to operate with inlet temperatures as high as 89.6°F-95°F (32°C-35°C), with specialized equipment operating up to 104°F (40°C). However, the recommended upper limit for most equipment classes is 80.6°F (27°C). With these elevated temperatures a significant amount of economization can occur, although there is considerable debate regarding the optimal solution.

Water-side Economization. A common feature in central plants with water-cooled chillers is the use of integrated economization through heat exchangers and cooling towers. This leads the design towards efficient water-cooled centrifugal chillers, which can be designed at a higher level of efficiency with elevated chilled water temperatures. Also, this practice mitigates concerns about particulate and gaseous contamination within the data center, as air is not brought directly into the facility. However, the use of heat exchangers increases the approach between chilled water temperatures and ambient conditions, and it also requires multiple pump line-ups. While this can be reduced by sizing equipment for low approaches and using variable frequency drives on motors, there is an energy penalty from the combined approaches across the cooling tower, heat exchanger and cooling coil. Another area of growing concern in the industry is the added water consumption of evaporative cooling towers: for every 1 MW of IT load, approximately 8.5 gallons (32.2 L) of water is lost per minute to evaporation, blowdown and drift from the cooling towers.
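To put that water-use figure in annual terms, the short sketch below simply annualizes the cited 8.5 gpm per MW of IT load; the assumption of continuous, year-round operation at full load is illustrative rather than a value from the paper.

```python
# Rough annualization of cooling tower make-up water, using the paper's figure
# of ~8.5 gallons per minute lost per 1 MW of IT load (evaporation, blowdown,
# drift) and an assumed 8,760 hours of continuous operation per year.
GAL_PER_MIN_PER_MW = 8.5
MIN_PER_YEAR = 60 * 8760

def annual_tower_water_gal(it_load_mw: float) -> float:
    """Approximate annual make-up water (gallons) for a given IT load in MW."""
    return it_load_mw * GAL_PER_MIN_PER_MW * MIN_PER_YEAR

print(f"{annual_tower_water_gal(1.0):,.0f} gal/yr per MW")  # ~4,467,600 gal/yr
```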

Direct Air-side Economization. This alternative to water-side economization is often considered for reasons of infrastructure first cost, facility size or location. It offers similar benefits of reduced mechanical cooling energy when ambient conditions allow. By utilizing dry-bulb or enthalpy controls, certain geographical locations may have more hours available for the same room conditions. This is due in large part to the differing control point (enthalpy versus wet-bulb temperature), as well as the absence of approach losses across cooling towers, heat exchangers and cooling coils. However, considerable research has been done on the impacts of particulate and gaseous contamination in a data center from air-side economization, which introduces reliability concerns in regions with poor air quality and high-sulfur coal power plants, such as China, India, and Southeast Asia. Additionally, research indicates that humidity control (both lower limits and rate of change) plays an important role in corrosion rates at the component level of servers and IT equipment. The lower limits on humidity can constrain the system in geographical locations where outside air temperatures are not suitable for direct introduction into the facility, thus limiting the availability of economization. The physical location also plays a role in the determination of air-side economization for data centers, as high levels of particulate from highways and airports, high levels of salt carryover from an ocean, or fine dust or pollen can contribute to higher component failure rates with direct air-side economization systems.
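A common way to compare locations for the control-point effect described above is simply to count qualifying hours in a typical-year weather record. The sketch below is a minimal, hypothetical example; the dry-bulb setpoint, the dew-point floor used as a stand-in for humidity control, and the weather data themselves are assumptions, not values from the paper.

```python
# Minimal sketch: count hours in a typical-year (e.g., TMY) record where outside
# air could be introduced directly, given an assumed supply-air dry-bulb limit
# and an assumed lower dew-point limit standing in for humidity control.
from typing import Iterable, Tuple

def direct_economizer_hours(hourly_db_dp_f: Iterable[Tuple[float, float]],
                            max_db_f: float = 75.0,   # assumed supply-air limit
                            min_dp_f: float = 42.0    # assumed lower humidity limit
                            ) -> int:
    """Return the number of hours meeting both the dry-bulb and dew-point limits."""
    return sum(1 for db, dp in hourly_db_dp_f if db <= max_db_f and dp >= min_dp_f)

# Usage with a placeholder two-hour record of (dry bulb, dew point) in °F:
print(direct_economizer_hours([(68.0, 50.0), (92.0, 70.0)]))  # -> 1
```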

Indirect Air-side Economization. A number of designs and systems have emerged in response to concerns over direct air-side economization while still maximizing the benefits of extended availability of an air-side system, as well as increasing the hours beyond a traditional enthalpy-control design. Indirect air-side economization generally utilizes a fixed-plate heat exchanger or a sensible-only heat wheel to transfer energy between an outside airstream and the return air from the data center. This is similar to a standard commercial energy recovery wheel but without the need to transfer latent energy. Outside air is used to cool recirculated return air, as opposed to using exhaust air to pre-heat or pre-cool ventilation outside air. While dependent on the specifications of the wheel or flat plate, these systems tend to transfer energy at a high effectiveness and greatly reduce or eliminate the introduction of outside air into the data center. Typically, there is no requirement for bypass air at the wheel, as no outside air is being introduced into the space; it is only used as a heat sink.
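The supply temperature such a sensible-only exchanger can deliver follows the usual sensible-effectiveness relation. The sketch below illustrates it with an assumed effectiveness value and example temperatures that are not from the paper, and it assumes roughly balanced airflows.

```python
# Sensible-only exchanger (plate or wheel): the return air leaving the exchanger
# approaches the outdoor dry bulb according to the sensible effectiveness.
# The effectiveness and temperatures below are illustrative assumptions.
def indirect_supply_temp_f(return_db_f: float, outdoor_db_f: float,
                           effectiveness: float = 0.75) -> float:
    """Approximate supply-air temperature delivered back to the white space."""
    return return_db_f - effectiveness * (return_db_f - outdoor_db_f)

# Example: 95°F return air, 55°F outdoors, 75% effective exchanger -> 65°F supply.
print(indirect_supply_temp_f(95.0, 55.0))  # 65.0
```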

Adiabatic Cooling. A variation on air-side economization systems relies on adiabatic cooling, injecting water into the incoming airstream to lower its dry-bulb temperature, which further increases the hours of economization. Since data centers can tolerate higher temperatures than comfort cooling systems, this method of cooling outside air expands the available hours considerably. A number of manufacturers offer adiabatic cooling components that can be integrated into indirect air-side economization (IASE) equipment as a standard offering.
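The dry-bulb depression available from adiabatic cooling is commonly estimated from the wet-bulb depression and a saturation effectiveness; the sketch below uses an assumed effectiveness and example conditions that are not from the paper.

```python
# Adiabatic (direct evaporative) pre-cooling: the leaving dry bulb approaches the
# wet bulb according to the media's saturation effectiveness. Values are assumptions.
def evap_leaving_db_f(entering_db_f: float, entering_wb_f: float,
                      saturation_effectiveness: float = 0.85) -> float:
    """Approximate leaving dry-bulb temperature of a direct evaporative stage."""
    return entering_db_f - saturation_effectiveness * (entering_db_f - entering_wb_f)

# Example: 95°F dry bulb / 65°F wet bulb air leaves at roughly 69.5°F.
print(evap_leaving_db_f(95.0, 65.0))  # 69.5
```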

Airflow Management Considerations

Incorporating airflow management within data centers is rapidly evolving into best practice, taking the data center model beyond the simple hot- and cold-aisle configuration that is now a must in any modern data center. Airflow management generally focuses on separating the cold supply air entering the IT equipment from the hot exhaust air coming off the discharge of the equipment. In doing so, bypass air is greatly reduced. This allows the supply air to be more fully utilized for its intended purpose, rather than having to provide additional airflow to overcome the effects of the blended temperatures that result from mixing the airstreams.

In designing a data center layout, the reduction in bypass air is an important consideration, as sizing equipment based on ideal scenarios (i.e., no bypass air in the system) can result in operational concerns. While the end goal is to reduce bypass, issues can arise when the facility goes live without careful consideration of the consequences of various ITE configurations and densities. One method of managing this process is using computational fluid dynamics (CFD) to model various scenarios and configurations and guide the design process to minimize bypass air. Further consideration should be given to incorporating brush guards and grommets, requiring blanking panels in empty rack locations, and properly sealing the floor to avoid crossover airflow.
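To make the bypass penalty concrete, the short sketch below sizes airflow for a rack using the standard sensible-heat relation for air in I-P units (q ≈ 1.08 × cfm × ΔT) and then inflates it for an assumed bypass fraction; the rack load, temperature rise and bypass fraction are illustrative assumptions, not figures from the paper.

```python
# Airflow required to remove a rack's sensible load, and the inflation caused by
# bypass air. Uses the standard I-P relation q[Btu/h] = 1.08 * cfm * dT[°F].
# The rack load, temperature rise, and bypass fraction below are assumptions.
BTU_PER_HR_PER_KW = 3412.14

def rack_cfm(rack_kw: float, delta_t_f: float) -> float:
    """Airflow (cfm) the ITE itself draws for a given load and temperature rise."""
    return rack_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

def supply_cfm_with_bypass(rack_kw: float, delta_t_f: float, bypass_fraction: float) -> float:
    """Supply airflow needed when a fraction of it bypasses the ITE entirely."""
    return rack_cfm(rack_kw, delta_t_f) / (1.0 - bypass_fraction)

print(round(rack_cfm(5.0, 20.0)))                     # ~790 cfm for a 5 kW rack at 20°F rise
print(round(supply_cfm_with_bypass(5.0, 20.0, 0.25))) # ~1053 cfm if 25% of supply bypasses
```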

Hot Aisle Containment. A common method of separating the hot and cold airstreams is the use of containment on the hot aisle of the data center. This can be accomplished a number of different ways, with common installations using flexible curtain-style partitions or hard paneling with doors. With hot aisle containment the partitions can extend up to the ceiling. This is often not required, however, because of local fire code considerations, as well as the buoyancy of the return air, which is less likely to short-circuit the system in the last 12-18 in. (30.5-45.7 cm) below the ceiling when a plenum return is present. The drawback to containing only the hot aisle is that the entire remainder of the white space will be conditioned to the supply air temperature, instead of only the portion of the data center directly in front of the ITE racks. However, there is a safety margin in the event of a failure of the containment system (such as a door remaining open on an aisle), as the room volume of cold air is larger in comparison to the volume of return air within the contained portion.

Cold Aisle Containment. The opposite approach to containment is to enclose the cold aisle, minimizing the control volume and providing the ITE an isolated source of supply air. Cold aisle containment typically requires end caps, such as doors, as well as an enclosed top to keep supply air from bypassing over the tops of the cabinets. In this configuration, the exhaust air from the servers is discharged freely into the data center, which can introduce elevated temperatures in the remainder of the raised floor environment. While this may be unpleasant for the occasional occupant, in the failure scenario described above of an aisle door remaining open it can lead to a shorter buffer time, as the cold aisle temperature can begin to increase when the warmer room volume migrates into the cold aisle.

Raised Floor Design. The most common design for a data center is a raised floor that serves as a supply plenum for delivering cold air to the ITE and is also utilized to route power and network cabling. Air is distributed through perforated tiles located adjacent to equipment and is generally returned through the room or through a return air plenum. This design is well understood and easy to deploy, allowing flexibility in the location of ITE and supply tiles, adjustments to the tile openings, and separation of the cabling in the data center. However, raised floor designs are being questioned in the industry: they carry a large cost and often no longer address the issues they were intended to correct. In existing facilities, eliminating the raised floor is unlikely to be an option; in new facilities, however, it is a strategy that should be questioned.

On-Slab White Space. While eliminating the raised floor within the data center may seem like a risky design decision, the industry is starting to adopt this as a viable option for general data center layouts; the telecommunications industry, moreover, has utilized on-slab critical spaces for decades. The design eliminates or greatly reduces floor loading concerns, offers a large first-cost savings on floor structures and underfloor fire protection, and has been suggested as a more efficient method of cooling. The setup can have drawbacks: the floor-structure cost savings can be offset by the installation of ductwork, static pressure on equipment may increase, and there is less flexibility in rearranging supply grille locations. These drawbacks can be mitigated or eliminated by considering designs that do not rely on supply ductwork, supplying air directly into the environment with containment separating the exhaust from the supply side of the ITE racks.

CONSIDERING OPERATING COSTS

Energy savings and first cost are important considerations in any facility design and are paramount in data centers, as the balance between capital expenditures, time-to-market and operating expense must be determined to successfully design a data center for the environment in which it will be located. This includes the utility's energy and demand rate structures as much as the geographic climate.

Energy Consumption Charges. The most direct cost savings in modern data center designs comes from reducing the operating costs of the facility with regard to utility expenditures. This is done through the selection and design of highly efficient systems, and is characterized in the industry by a lower power usage effectiveness (PUE). While not a trivial task, this is the easier utility charge to manage, as consumption charges are generally straightforward in translating energy use reduction into utility bill reduction. Energy rates will vary by state and utility, impacting the design decisions for the facility. However, unless there is freedom to locate the data center within a geographic region rather than within a certain state or near a major metropolitan center, the utility rate will be a fairly fixed constraint that can influence the systems for the data center, and its full impact should be considered during design.
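Because PUE multiplies directly into the consumption charge, a simple annualized estimate is often enough to compare designs. The sketch below assumes continuous operation at a constant IT load; the load, the PUE values and the electricity rate are illustrative assumptions rather than figures from the paper.

```python
# Annual electricity cost as a function of IT load, PUE and energy rate, assuming
# a constant load over 8,760 operating hours. All numeric inputs are assumptions.
HOURS_PER_YEAR = 8760

def annual_energy_cost(it_load_kw: float, pue: float, rate_per_kwh: float) -> float:
    """Approximate annual consumption charge: total facility kWh times the rate."""
    total_kwh = it_load_kw * pue * HOURS_PER_YEAR
    return total_kwh * rate_per_kwh

# Example: a 1,000 kW IT load at $0.08/kWh; improving PUE from 1.6 to 1.3 saves
# roughly $210,000 per year under these assumptions.
print(annual_energy_cost(1000, 1.6, 0.08) - annual_energy_cost(1000, 1.3, 0.08))
```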

Figure 1   Average electricity consumption charges across the continental United States, designated by state.

Peak Demand Considerations. While the annual energy consumption is often reviewed and factored into a life-cycle cost analysis, it is equally important to consider the demand charges the facility may incur as a result of the system choice. A system that relies on high levels of economization to offset compressor energy use may be an attractive option from an operational efficiency standpoint (i.e., lowered annual compressor energy with increased economizer hours). During peak times, however, the costs incurred due to higher power demand may offset the consumption savings. Additionally, designs with a high peak demand (and peak PUE) result in a larger first cost, as the electrical infrastructure must be sized accordingly, requiring an increase in the size or quantity of generators, switchgear and transformers. The additional
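The trade-off between consumption savings and demand charges can be checked with a back-of-the-envelope comparison like the sketch below; the demand rate, peak-kW difference and energy savings shown are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope check of whether annual demand charges erode the
# consumption savings of a design with a higher electrical peak.
# All numeric inputs are illustrative assumptions.
def annual_demand_charge(peak_kw: float, demand_rate_per_kw_month: float) -> float:
    """Approximate annual demand charge, assuming the peak recurs every month."""
    return peak_kw * demand_rate_per_kw_month * 12

extra_peak_kw = 200                    # additional peak demand of the alternative design
demand_rate = 15.0                     # $/kW per month, assumed tariff
annual_consumption_savings = 50_000.0  # $/yr saved on energy, assumed

net = annual_consumption_savings - annual_demand_charge(extra_peak_kw, demand_rate)
print(f"Net annual savings: ${net:,.0f}")  # $14,000 under these assumptions
```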
