Optimizing Data Center TCO: Efficiency Metrics and an Infrastructure Cost Model

Christopher G. Malone, PhD          Christian L. Belady, PE
Associate Member ASHRAE

Christopher G. Malone is the thermal technologies architect for Hewlett-Packard's Business Critical Server group, Roseville, CA. Christian L. Belady is the principal power and cooling architect at Microsoft, Redmond, WA.

ABSTRACT

This paper provides several metrics to characterize the efficiency of data centers. Power Usage Effectiveness (PUE) measures the fraction of the total facility power devoted to IT work. Compute Power Efficiency (CPE) measures the overall efficiency of the data center, considering both power and cooling and the utilization of the data center IT equipment. The paper describes several surveys characterizing PUE for enterprise data centers. Based on the data collected to date, a PUE of 2.0 is expected for a world-class facility. In the near future, data center IT power consumption is likely to vary more dramatically in response to workload changes as IT hardware power management technologies are adopted broadly. Data center power and cooling infrastructure must be designed with such variation in mind so that the overall efficiency is not impacted. A simple data center infrastructure and energy cost model is presented to expose the dramatic change in the relative importance of hardware costs to infrastructure and energy costs. Accurate models capturing these costs must be considered by organizations seeking leadership IT cost structures.

INTRODUCTION

Until recently, data center energy and physical infrastructure costs have not been a major issue for most IT organizations. The focus has historically been placed on hardware costs and uptime. As hardware costs have decreased and IT equipment power density has increased over the last decade, energy and infrastructure costs have become a significant fraction of the IT budget, in some cases the largest fraction. In order to optimize IT cost structures, data center efficiency metrics and infrastructure models are required. This paper will discuss recently proposed data center efficiency metrics and present the results from two surveys to characterize the efficiency of today's state-of-the-art data center. In addition, this paper will also introduce a new metric to characterize the overall computational efficiency of the data center.

Figure 1 shows the IT equipment industry power trend curves developed by the ASHRAE TC 9.9 technical committee (ASHRAE, 2005). The curves illustrate a continuous rise in IT power for all hardware form factors and types through the year 2014. At this point, there is no reason to expect that the trend will not continue beyond 2014.

Figure 1   ASHRAE TC 9.9 equipment power projection (ASHRAE, 2005).

As server power density has grown, hardware costs have remained roughly constant and performance has grown dramatically. Figure 2 illustrates the general trends in server performance and performance per watt between 1999 and 2006. Over this 7-year timeframe, server performance has increased by approximately 75 times and performance per watt has increased 16 times, nearly doubling every 2 years (Belady, 2007). While hardware costs have changed little, servers have become significantly more efficient year over year, both from an energy perspective and a price-performance perspective. Over this same timeframe, compute demand has grown at an even faster pace, due to the massive build-out of the internet infrastructure and the falling cost of compute. As the pure performance and price-performance metrics of commodity servers have improved, companies have identified more applications with an acceptable ROI. The consequences of the global build-out of IT equipment to data center power and cooling began to emerge several years ago and are now a concern for most IT organizations, a fact well documented by analysts and the technical and business press. The deployment of denser, higher-power equipment to meet IT demand has caused data center power and cooling issues.

Figure 2   Raw performance and performance/watt trends for a typical server (Belady, 2007).

As an example to characterize the scope of the problem, Gartner recently predicted that half of the world's data centers will run out of power by the end of 2008 (Gartner Inc., 2006). Within the last year, new but related environmental impact issues have emerged for IT organizations. Curtailment of power generation growth by utilities, rising energy costs, and considerations of the carbon footprint impact of data centers serve as another impetus for IT organizations to examine data center efficiency.

Determining the total cost of ownership for operating a data center is a challenge for most IT departments. Hardware and software costs are rigorously tracked and assessed, but there is a general lack of analytical frameworks for determining operational costs, including infrastructure costs (power conditioning, backup, and distribution, and the air conditioning equipment) and the energy costs associated with powering and cooling the IT equipment. Figure 3 compares these costs to a 1U server hardware cost (Belady, 2007). In the 1990s, data center costs were driven primarily by IT hardware costs. Over time, however, infrastructure and energy costs have come to dominate. The impact of this change is still not well understood by the general IT industry for various reasons, including the lack of appropriate modeling tools and the typical division of responsibilities in most organizations between the IT departments specifying hardware and the facilities departments who build and manage data centers. This paper aims to provide metrics and models to help quantify these costs.

Figure 3   Annual amortized costs in the data center for a 1U server (Belady, 2007).

DATA CENTER EFFICIENCY METRIC

The rapid growth of the global compute infrastructure has created profound changes in power density and in the cost of the physical infrastructure and power relative to the IT hardware cost, as illustrated in Figure 3. These changes demand more-efficient data center infrastructure design to allow for continued improvements in the total cost of ownership for IT. Until recently, there was no metric which characterized the efficiency of the data center. To address this issue, Malone and Belady (2006) proposed a data center efficiency metric called Power Usage Effectiveness (PUE). This metric is defined as the ratio of the total facility power in the data center to the power of the IT equipment on the raised floor:

    PUE = Total Facility Power / IT Equipment Power    (1)

The total facility power is the power delivered to operate the data center, including power for operating the IT equipment and the cooling infrastructure. This includes the power for the IT equipment, switch gear, uninterruptible power supply (UPS), chiller, cooling tower, air conditioners, liquid conditioners, etc. The IT equipment power is defined as the actual line cord power drawn by all the IT equipment in the data center. It does not include the power losses associated with conditioning and reducing the voltage from the utility. The intent is to consider only the power used for IT equipment.
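As a minimal illustration of equation (1), the sketch below computes PUE from an itemized power inventory. The overhead categories and wattages are hypothetical; in practice, both terms come from metered measurements at the facility.

    # Minimal sketch of equation (1); category names and values are hypothetical.
    # Total facility power = IT equipment power plus every supporting load
    # (power conditioning/delivery losses and the cooling plant).

    it_equipment_kw = 1000.0  # line cord power drawn by all IT equipment

    overhead_kw = {
        "switch_gear_and_ups_losses": 140.0,   # power conditioning and delivery
        "chiller_and_cooling_tower": 600.0,    # heat rejection plant
        "crac_units_and_air_handling": 260.0,  # room air conditioning
    }

    total_facility_kw = it_equipment_kw + sum(overhead_kw.values())
    pue = total_facility_kw / it_equipment_kw  # equation (1)

    print(f"PUE = {total_facility_kw:.0f} kW / {it_equipment_kw:.0f} kW = {pue:.2f}")
    # -> PUE = 2000 kW / 1000 kW = 2.00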

PUE is useful for determining the total power consumption associated with IT equipment. For instance, the power consumption of an individual server is typically well known through specification or direct measurement, and is often used for purchase decisions. The total power consumption associated with operating the server, including power distribution losses and power for the cooling infrastructure, may be determined using PUE. This is important for a comprehensive assessment of the total cost of ownership for hardware in a particular data center. PUE is the multiplier for determining the real energy cost for the IT hardware. The annual energy cost for a server, storage unit, etc., is determined using the following equation:

    Annual Energy Use Cost = (8760 hrs/yr) × (Average Utility Rate in $/kWh) × (Equipment Power in kW) × PUE    (2)

The Green Grid (2007) has adopted PUE as the industry-standard data center efficiency metric, with the intent to use PUE broadly for benchmarking and for determining total energy costs, as shown in equation (2).
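The sketch below is a direct transcription of equation (2); the 0.5 kW server power and $0.10/kWh utility rate are hypothetical inputs chosen only for illustration.

    # Equation (2): annual energy cost for a piece of IT equipment.
    # The server power and utility rate below are hypothetical.

    HOURS_PER_YEAR = 8760

    def annual_energy_cost(equipment_kw: float, rate_per_kwh: float, pue: float) -> float:
        """Annual Energy Use Cost = 8760 h/yr x rate ($/kWh) x power (kW) x PUE."""
        return HOURS_PER_YEAR * rate_per_kwh * equipment_kw * pue

    # A 0.5 kW server in a PUE = 2.0 facility at $0.10/kWh:
    print(f"${annual_energy_cost(0.5, 0.10, 2.0):,.0f} per year")  # -> $876 per year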

DATA CENTER EFFICIENCY STUDIES

Data center efficiency is a relatively new concept for the IT industry. Until recently, power and cooling technologies and the associated costs were of secondary importance. Now that power and cooling are constraining data center utilization and forcing the construction of new data centers at a cost of $1,000/ft² or more, it is imperative to optimize data center efficiency to make the best use of the facility. Given that this is a new concern, IT departments lack a good understanding of typical data center PUE ranges and a target PUE value for efficient data centers. In the paper introducing PUE (Malone and Belady, 2006), estimates were provided on the distribution of PUE in enterprise data centers, based on the Uptime Institute's PUE estimate for its client distribution. The data is summarized in Table 1. This study suggests that 85% of the world's data centers have a PUE of 3.0, which is very inefficient. Expressed in terms of power, this means that 1 W of IT power requires 2 W of power for the cooling equipment and for power conditioning and delivery. The majority of the 2 W is used for cooling; typical power conversion and conditioning losses are approximately 14% of the IT power. The target PUE value for a reasonably efficient data center was judged to be 2.0. The ideal efficiency was 1.6 or less.
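To make the PUE = 3.0 breakdown above concrete, the sketch below allocates the 2 W of overhead per watt of IT power using the paper's figures (roughly 14% of IT power lost to conversion and conditioning, the remainder going to cooling); the per-watt split is illustrative, not measured.

    # Illustrative breakdown of a PUE = 3.0 facility, per watt of IT power.
    pue = 3.0
    overhead_w_per_it_w = pue - 1.0  # 2.0 W of overhead per 1 W of IT power
    conditioning_w = 0.14            # ~14% of IT power lost in conversion/conditioning
    cooling_w = overhead_w_per_it_w - conditioning_w  # remainder driven by cooling

    print(f"Per 1 W of IT power: {conditioning_w:.2f} W conditioning, {cooling_w:.2f} W cooling")
    # -> Per 1 W of IT power: 0.14 W conditioning, 1.86 W cooling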

The data shown in Table 1 was an estimate, lacking comprehensive experimental results. In this paper, we present the results of two data center power consumption investigations in an effort to better quantify the PUE of a typical data center. Figure 4 presents PUE data from an extensive survey of 19 US data centers completed by Lawrence Berkeley National Laboratory (Greenberg et al., 2006). This work indicates that the range of PUE can be anywhere from 1.3 to 3.0, with an average value of about 1.9. The range of PUE values is similar to the results from Table 1, but the average is significantly lower. This is likely due to many factors, including:

1. The relatively high quality of the cooling solutions of the data centers considered in the study as compared to the typical data center.
2. External ambient conditions during measurements.
3. The period of time over which the measurements were conducted.
4. Whether air- and/or liquid-side economizers were in use during the measurements.
5. IT workload variations during the measurements.
6. Size and power density of the data centers.
7. Variations in power distribution and conditioning technologies and implementations.
8. Variations in cooling infrastructure technologies and implementations.
9. Whether the power measurements were made in a manner consistent with the PUE definition.

In the future, this type of information needs to be captured to understand why some data centers operate more efficiently than others.

HP conducted a detailed power and cooling assessment for 13 enterprise data centers located around the world (personal communications, 2007). The data was used to determine PUE, and the results are summarized in Figure 5. The average PUE was 2.1. The data center designs varied dramatically: some used chilled water for cooling, others direct refrigerant expansion (DX); the floor size, total power, and power density (W/ft²) ranged by an order of magnitude. Despite these variations, the PUE results were similar for most of the data centers, and a strong correlation between PUE and these variables was not readily apparent. Data center 8 was significantly worse than the average facility. The PUE assessment allows for simple identification of inefficient data centers, which may then be targeted for improvements or closed. Without the holistic measure of total facility efficiency provided by PUE, and a target value for average or leadership efficiency, IT organizations would not be able to easily and strategically target resources for improvement. Outlier data centers, such as data center 8, may not be recognized.

The results of these two studies indicate that properly designed and managed data centers have a PUE of approximately 2.0. The authors recommend that all IT departments evaluate their data center PUE values and that the industry create a comprehensive PUE database to better characterize the range of expected values as a function of technology choice and design practices. There is a significant opportunity to improve upon the 2.0 average value identified in these studies. There are a number of new data center cooling solutions that offer dramatic improvement in both supported power density and efficiency. For example, Dynamic Smart Cooling (DSC) has been demonstrated to reduce power for a data center cooling solution by 45% (Patel et al., 2006). DSC is an advanced data center thermal management solution which controls the output from data center CRAC units to minimize energy consumption while meeting IT equipment temperature and airflow requirements. Reducing a data center PUE from 2.0 to the ideal PUE value of 1.6 or less listed in Table 1 is feasible with such solutions. The energy savings for large data centers are considerable. For example, reducing a data center PUE from 2.0 to 1.6 for 1 MW of IT equipment power results in a total facility power savings of 400 kW, or $350,000 per year at a power cost of $0.10/kWh. The power savings might also be re-purposed to support additional IT equipment in the data center.
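The savings figure above follows directly from equation (2); the sketch below reproduces the arithmetic for the 1 MW example.

    # Reproducing the paper's savings example: PUE 2.0 -> 1.6 at 1 MW of IT power.
    HOURS_PER_YEAR = 8760
    it_power_kw = 1000.0   # 1 MW of IT equipment power
    rate_per_kwh = 0.10    # $/kWh

    facility_kw_before = it_power_kw * 2.0  # 2000 kW total facility power
    facility_kw_after = it_power_kw * 1.6   # 1600 kW total facility power

    saved_kw = facility_kw_before - facility_kw_after         # 400 kW
    saved_dollars = saved_kw * HOURS_PER_YEAR * rate_per_kwh  # ~$350,000/yr

    print(f"{saved_kw:.0f} kW saved, about ${saved_dollars:,.0f} per year")
    # -> 400 kW saved, about $350,400 per year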

COMPUTE POWER EFFICIENCY (CPE)

PUE measures how power is utilized in the data center, with the aim of helping IT organizations maximize the fraction of power devoted to IT equipment. To fully address the efficiency of the data center, IT equipment utilization must be considered along with IT power. The metric Compute Power Efficiency (CPE) is proposed to capture this broader definition of efficiency:

    CPE = (IT Equipment Utilization × IT Equipment Power) / Total Facility Power = IT Equipment Utilization / PUE    (3)

This metric measures the fraction of the total facility power used for IT work. It connects IT hardware efficiency with the data center infrastructure efficiency. The metric may be applied to an individual server or an entire data center, provided a weighted average utilization may be determined. In many data centers, enterprise server utilization, typically characterized by processor usage, is often 20% or less. This means that for a well-managed data center with a PUE of 2.0, the CPE is 10%: only 1 W of every 10 W of utility power is actually used for computing. Data centers with a PUE of 3.0 have a CPE of 6.7%. CPE indicates that a tremendous amount of energy is wasted.
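A minimal sketch of equation (3), assuming utilization is expressed as a fraction and stands in for the weighted-average IT equipment utilization, however an organization chooses to measure it.

    # Equation (3): CPE = (IT utilization x IT equipment power) / total facility power
    #                   = IT utilization / PUE

    def compute_power_efficiency(utilization: float, pue: float) -> float:
        """Fraction of the total facility power used for IT work."""
        return utilization / pue

    # The paper's examples: 20% utilization at PUE 2.0 and PUE 3.0.
    print(f"{compute_power_efficiency(0.20, 2.0):.1%}")  # -> 10.0%
    print(f"{compute_power_efficiency(0.20, 3.0):.1%}")  # -> 6.7%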
