ETSI GS NFV-PER 001 V1.1.2 (2014-12)
Network Functions Virtualisation (NFV); NFV Performance & Portability Best Practises

Intellectual Property Rights

IPRs essential or potentially essential to the present document may have been declared to ETSI. The information pertaining to these essential IPRs, if any, is publicly available for ETSI members and non-members, and can be found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the ETSI Web server (http://ipr.etsi.org).

Pursuant to the ETSI IPR Policy, no investigation, including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become, essential to the present document.

Foreword

This Group Specification (GS) has been produced by ETSI Industry Specification Group (ISG) Network Functions Virtualisation (NFV).

Modal verbs terminology

In the present document "shall", "shall not", "should", "should not", "may", "may not", "need", "need not", "will", "will not", "can" and "cannot" are to be interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of provisions).

"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.

1 Scope
The present document provides a list of features which the performance and portability templates (Virtual Machine Descriptor and Compute Host Descriptor) should contain for the appropriate deployment of Virtual Machines over a Compute Host (i.e. a "telco datacentre"). In addition, the document provides a set of recommendations and best practises on the minimum requirements that the HW and hypervisor should have for a "telco datacentre" suitable for data-plane workloads. The recommendations and best practises are based on test results from the performance evaluation of data-plane workloads. It is recognized that the recommendations are required for VNFs supporting data plane workloads and that a small portion of the recommended list is not required in all cases of VNFs, such as VNFs related to control plane workloads.

2 References

References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies.

Referenced documents which are not found to be publicly available in the expected location might be found at http://docbox.etsi.org/Reference.

NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity.

2.1 Normative references

The following referenced documents are necessary for the application of the present document.
[1] ETSI GS NFV 003: "Network Functions Virtualisation (NFV); Terminology for Main Concepts in NFV".

[2] ETSI GS NFV 001: "Network Functions Virtualisation (NFV); Use Cases".

2.2 Informative references

The following referenced documents are not necessary for the application of the present document but they assist the user with regard to a particular subject area.

[i.1] Open Virtualisation Format Specification Version 2.1.0.
NOTE: Available at: http://www.dmtf.org/sites/default/files/standards/documents/DSP0243_2.1.0.pdf.

[i.2] Libvirt - The Virtualisation API.
NOTE: Available at: http://libvirt.org/.

[i.3] AWS CloudFormation.
NOTE 1: Available at: http:/
NOTE 2: AWS CloudFormation is an example of a suitable product available commercially. This information is given for the convenience of users of the present document and does not constitute an endorsement by ETSI of this product.

[i.4] Portable Hardware Locality.
NOTE: Available at: http://www.open-mpi.org/projects/hwloc/.

[i.5] IETF RFC 2544: "Benchmarking Methodology for Network Interconnect Devices".
NOTE: Available at: http://www.ietf.org/rfc/rfc2544.txt.

[i.6] IETF RFC 2889: "Benchmarking Methodology for LAN Switching Devices".
NOTE: Available at: http://www.ietf.org/rfc/rfc2889.txt.

[i.7] IETF RFC 3918: "Methodology for IP Multicast Benchmarking".
NOTE: Available at: http://www.ietf.org/rfc/rfc3918.txt.

[i.8] IETF RFC 3511: "Benchmarking Methodology for Firewall Performance".
NOTE: Available at: http://tools.ietf.org/rfc/rfc3511.txt.

[i.9] IEEE 1588: "IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems".

3 Definitions and abbreviations

3.1 Definitions

For the purposes of the present document, the terms and definitions
given in ETSI GS NFV 003 [1] and the following apply:

NOTE: A term defined in the present document takes precedence over the definition of the same term, if any, in ETSI GS NFV 003 [1].

Network Function (NF): A functional building block within a network infrastructure, which has well-defined external interfaces and a well-defined functional behaviour. In practical terms, a Network Function is today often a network node or physical appliance.

Network Functions Virtualisation Infrastructure (NFVI): The NFV-Infrastructure is the totality of all hardware and software components which build up the environment in which VNFs are deployed. The NFV-Infrastructure can span several locations, i.e. multiple N-PoPs. The network providing connectivity between these locations is regarded as part of the NFV-Infrastructure. The NFV-Infrastructure includes resources for computation, networking and storage.

Compute Host: A Compute Host is the whole server entity, part of an NFVI, composed of a HW platform (processor, memory, I/O devices, internal disk) and a hypervisor running on it.

Compute Host Descriptor (CHD): A Compute Host Descriptor is a template defining a storage schema for the capabilities and the up-to-date available resources which a Compute Host server can offer to VM Images at deployment time. There is therefore one Compute Host Descriptor for every Compute Host, containing both its capabilities and its available resources.

Virtualised Network Function (VNF): An implementation of an NF that can be deployed on a Network Functions Virtualisation Infrastructure (NFVI). A VNF can be deployed as a set of Virtual Machines (VMs), i.e. SW components that are deployable, maintainable and manageable, each emulating a single computer.

Virtual Machine Instance (VM Instance): A Virtual Machine Instance is a Virtual Machine already running on a Compute Host.

Virtual Machine Image (VM Image): A VM Image is an executable SW component that can be deployed in Compute Hosts as one or several Virtual Machine Instances.

Virtual Machine Configuration (VM Configuration): A VM Configuration is the final configuration to be applied when deploying a VM Image on a specific Compute Host.
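NOTE: The following Python sketch is purely illustrative and not part of the present document. It shows one hypothetical way the Compute Host Descriptor defined above could be modelled, with invented field names, together with a simple check of a host's available resources against a VM's requirements.

# Illustrative only: a hypothetical Compute Host Descriptor (CHD) modelled
# as a Python dictionary. The field names are invented for this example;
# the present document does not define a concrete schema at this point.
compute_host_descriptor = {
    "capabilities": {                # static properties of the Compute Host
        "cpu_model": "x86-64",
        "sockets": 2,
        "cores_per_socket": 8,
        "hugepage_sizes_kib": [2048, 1048576],
        "sriov_vfs_total": 16,
    },
    "available_resources": {         # kept up to date as VMs are deployed
        "free_cores": 10,
        "free_memory_mib": 49152,
        "free_hugepages_2mib": 8192,
        "free_sriov_vfs": 12,
    },
}

def can_host(chd: dict, vm_requirements: dict) -> bool:
    """Check whether the host's available resources cover a VM's needs."""
    avail = chd["available_resources"]
    return all(avail.get(key, 0) >= need for key, need in vm_requirements.items())

# Example: a data-plane VM Image asking for 4 dedicated cores and 2 SR-IOV VFs.
print(can_host(compute_host_descriptor,
               {"free_cores": 4, "free_sriov_vfs": 2}))   # -> True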
3.2 Abbreviations

For the purposes of the present document, the following abbreviations apply:

AAA Authentication, Authorization, and Accounting
API Application Programming Interface
ARP Address Resolution Protocol
AS Application Server
BBU Base Band Unit
BFD Bidirectional Forwarding Detection
BGP Border Gateway Protocol
BIOS Basic Input/Output System
BNG Broadband Network Gateway
BRAS Broadband Remote Access Server
BW Bandwidth
CDN Content Delivery Network
CGNAT Carrier Grade Network Address Translation
CHD Compute Host Descriptor
CIFS Common Internet File System
COTS Commercial Off-The-Shelf
CPE Customer Premises Equipment
CPU Central Processing Unit
C-RAN Cloud-Radio Access Network
CVLAN Customer VLAN
DCB Data Center Bridging
DDoS Distributed Denial of Service
DDR2 Double Data Rate type 2
DDR3 Double Data Rate type 3
DHCP Dynamic Host Configuration Protocol
DMA Direct Memory Access
DPI Deep Packet Inspection
DSLAM Digital Subscriber Line Access Multiplexer
DUT Device Under Test
E-CPE Enterprise-Customer Premises Equipment
ERPS Ethernet Ring Protection Switching
FFT Fast Fourier Transform
FIB Forwarding Information Base
FTP File Transfer Protocol
FW Firewall
GB GigaByte
GE Gigabit Ethernet
GGSN Gateway GPRS Support Node
GPRS General Packet Radio Service
GPS Global Positioning System
GRE Generic Routing Encapsulation
GUI Graphical User Interface
GW Gateway
HTML HyperText Markup Language
HTTP HyperText Transfer Protocol
HW Hardware
I/O Input/Output
I-CSCF Interrogating-Call Session Control Function
IMS IP Multimedia Subsystem
IO Input Output
IOMMU Input/Output Memory Management Unit
IOTLB I/O Translation Lookaside Buffer
IP Internet Protocol
IPC Inter-Process Communication
IPoE IP over Ethernet
IPsec IP security
ISIS Intermediate System to Intermediate System
KPI Key Performance Indicator
L4 Layer 4
L7 Layer 7
LPM Longest Prefix Match
MAC Media Access Control
MAN Metropolitan Area Network
MANO MANagement and Orchestration
MGCF Media Gateway Controller Function
MME Mobility Management Entity
MMU Memory Management Unit
MOS Mean Opinion Score
MOS-AV MOS-AudioVisual

From that bottleneck analysis, a set of changes is proposed (SW design, SW configuration, OS configuration); then these changes are applied and a new trial is performed. With each trial-and-error iteration, the aim is to improve the performance metrics (e.g. packet processing throughput) as much as possible. As a result of this iterative process, the actual HW bottlenecks emerge, and it is possible to provide a set of best practises and recommendations related to the HW itself, the SW design and the SW configuration.

Once this analysis has been completed, a similar methodology is followed in the virtualised environment. In this case, the bottleneck analysis should take into account the virtualisation capabilities of the processor (memory, I/O, etc.), the virtualisation capabilities of the I/O device (e.g. SR-IOV), the hypervisor type and its configuration, and the guest OS configuration. Again, with each trial-and-error cycle, the aim is to improve the performance metrics as much as possible, taking into account that the target is the performance on bare metal. As a result of this second iterative process, new bottlenecks in the HW and in the hypervisor are expected to be identified, leading to a set of recommendations related to HW virtualisation capabilities, hypervisor/host configuration, and guest OS configuration.

In summary, the methodology can be seen as a two-step process:

Step 1. Reach the maximum possible performance in bare metal, which should allow determining the actual HW and SW bottlenecks.

Step 2. Reduce as much as possible the gap between the virtualised environment and the bare metal environment. This should allow identifying new bottlenecks related to virtualisation capabilities.

In this regard, it should be noted that the gap might change from one virtualisation environment to another (e.g. from a full-virtualisation environment to a para-virtualisation environment) and even from one hypervisor to another, although the qualitative effects of the interaction with the HW are expected to be equivalent.
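NOTE: The following Python sketch is purely illustrative. It shows the kind of measurement at the heart of each trial, in the spirit of the RFC 2544 [i.5] throughput test: a binary search for the highest offered load that the device under test forwards without loss. The send_at_rate() hook is a hypothetical placeholder for the traffic generator of the test bed.

# Illustrative sketch (not part of the present document) of the iterative
# throughput trial described above, in the spirit of the RFC 2544 [i.5]
# throughput test: binary-search the highest offered load (in Mpps) that
# the device under test forwards with zero packet loss.

def send_at_rate(rate_mpps: float, duration_s: int = 60) -> float:
    """Offer traffic at rate_mpps for duration_s and return the loss ratio.

    Placeholder for the real traffic generator of the test bed. As a toy
    stand-in, this models a DUT that starts dropping above 12.0 Mpps.
    """
    return 0.0 if rate_mpps <= 12.0 else (rate_mpps - 12.0) / rate_mpps

def zero_loss_throughput(line_rate_mpps: float, resolution: float = 0.01) -> float:
    """Binary search between zero and line rate for the zero-loss point."""
    lo, hi = 0.0, line_rate_mpps
    while hi - lo > resolution:
        mid = (lo + hi) / 2
        if send_at_rate(mid) == 0.0:
            lo = mid          # no loss observed: try a higher rate
        else:
            hi = mid          # loss observed: back off
    return lo

# Each trial-and-error iteration (a SW design, SW config or OS config
# change) re-runs the measurement and keeps the change if it improves.
print(zero_loss_throughput(14.88))   # 14.88 Mpps: 10GE line rate, 64-byte frames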
Annex C gathers test methodologies applicable to the performance evaluation of control plane and data plane workloads. In addition, NFV introduces further variables, such as distributed deployment across multiple physical servers and failure detection via multiple NFV orchestration components such as the EMS, the VNF Manager and the VIM. The ability of DUT components to reside in multiple VMs also makes VM migration scenarios possible as part of operational maintenance or resource optimization. This creates a need to test these NFV-specific scenarios in order to measure the impact of these variabilities on the QoE and SLAs of the various services and applications offered by the DUT. Some of these methodologies to test NFV-specific scenarios are also listed in annex C.
6 Bottleneck analysis and relevant features for high and predictable performance

This clause presents the main conclusions for the tested workloads, highlighting relevant HW and SW features and how they affect performance. The present document specifically covers the analysis of data plane workloads in intermediate elements such as a BRAS. A detailed description of the test results can be found in annex C.

6.1 Data plane workload in an intermediate element

This clause discusses the HW and SW features relevant to building and deploying VNF/VM Images dealing with data plane workloads such as packet switching/forwarding and header encapsulation/de-encapsulation. These workloads are present in VNFs such as a BRAS, a P-GW, or a PE node.

First, the features relevant to running VNFs dealing with this kind of data plane workload on bare metal (e.g. as non-virtualised, OS-based software) are presented in clause 6.1.1. These features have been identified from experimental tests of a simplified IP edge node (i.e. a BRAS) developed for the purpose. In these tests, it was possible to achieve throughput figures of the same order of magnitude as PNFs on the market. As discussed in annex C, the key is to remove the bottlenecks in I/O and memory access so that the VNF bottleneck is located in the CPU. Throughout clause 6.1.1, the different HW and OS features necessary to preserve this behaviour once the VNF is deployed are enumerated.

Clause 6.1.2 discusses the features relevant to obtaining the highest possible performance in a fully-virtualised environment. These features were identified from experimental tests on the same BRAS prototype running on a VM. The tests followed the iterative trial-and-error procedure, taking as target the throughput figures obtained in the bare metal tests. The same techniques as on bare metal were applied in the virtualised environment to remove the common bottlenecks in I/O and memory. However, a new bottleneck arose due to the need for double memory address translation in the virtualised environment. After applying optimizations such as HW support for virtualisation, the remaining gap with bare metal was only due to the lack of HW virtualisation support for memory translation of large I/O pages. This limitation was solved in the latest processor generations, so that the performance gap in the virtualised environment is negligible when compared to bare metal.
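NOTE: The following Python sketch is purely illustrative and assumes the libvirt API [i.2]. It defines a hypothetical KVM guest whose memory is backed by host huge pages and whose vCPUs are pinned to dedicated host cores, two hypervisor-level measures of the kind discussed in this clause; all names and values are example placeholders, not a recommended configuration.

# Illustrative sketch only, assuming the libvirt API [i.2]: define and start
# a hypothetical KVM guest whose RAM is backed by host huge pages and whose
# vCPUs are pinned to dedicated host cores. Names, core numbers and sizes
# are example placeholders.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>dataplane-vnf-example</name>
  <memory unit='GiB'>4</memory>
  <memoryBacking>
    <hugepages/>                     <!-- back guest RAM with huge pages -->
  </memoryBacking>
  <vcpu>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>   <!-- pin vCPU0 to host core 2 -->
    <vcpupin vcpu='1' cpuset='3'/>   <!-- pin vCPU1 to host core 3 -->
  </cputune>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

if __name__ == "__main__":
    conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
    domain = conn.defineXML(DOMAIN_XML)    # register the guest definition
    domain.create()                        # boot the guest
    conn.close()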
6.1.1 High and predictable performance on bare metal

There are a number of features of the processor, memory, I/O devices, the OS, and the SW code itself which make the difference in achieving high and predictable performance on bare metal. Unsurprisingly, the underlying server HW characteristics have a deep impact on performance. Parameters such as the processor architecture, number of cores, clock rate, size of the internal processor cache(s), memory channels, memory latency, bandwidth of peripheral buses and inter-processor buses, instruction set, etc. have a strong impact on the performance of the specific application or VNF running on that HW.

SW design is also of utmost relevance to take advantage of all the capabilities offered by multi-core processor architectures.