ETSI GS NFV-IFA 001 V1.1.1 (2015-12)
Network Functions Virtualisation (NFV); Acceleration Technologies; Report on Acceleration Technologies

Intellectual Property Rights

IPRs essential or potentially essential to the present document may have been declared to ETSI. The information pertaining to these essential IPRs, if any, is publicly available for ETSI members and non-members, and can be found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the ETSI Web server (https://ipr.etsi.org).

Pursuant to the ETSI IPR Policy, no investigation, including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become, essential to the present document.

Foreword

This Group Specification (GS) has been produced by ETSI Industry Specification Group (ISG) Network Functions Virtualisation (NFV). The present document gives an overview of the series of documents covering NFV Acceleration. The trademarks mentioned within the present document are given for the convenience of users of the present document and do not constitute an endorsement by ETSI of these products.

Modal verbs terminology

In the present document "shall", "shall not", "should", "should not", "may", "need not", "will", "will not", "can" and "cannot" are to be interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of provisions). "must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.

1 Scope
The present document provides an overview of NFV acceleration techniques and suggests a common architecture and abstraction layer, which allows deployment of various accelerators within the NFVI and facilitates interoperability between VNFs and accelerators. The present document also describes a set of use cases illustrating the usage of acceleration techniques in an NFV environment.

2 References

2.1 Normative references

References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies. Referenced documents which are not found to be publicly available in the expected location might be found at https://docbox.etsi.org/Reference.

NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity.

The following referenced documents are necessary for the application of the present document.

Not applicable.

2.2 Informative references

References are either specific (identified by date of publication and/or edition number or version number) or non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the referenced document (including any amendments) applies.

NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee their long term validity.

The following referenced documents are not necessary for the application of the present document but they assist the user with regard to a particular subject area.

[i.1] ETSI GS NFV 003: "Network Functions Virtualisation (NFV); Terminology for main concepts in NFV".
[i.2] ETSI GS NFV-INF 003: "Network Functions Virtualisation (NFV); Infrastructure; Compute Domain".
[i.3] ETSI GS NFV-INF 005: "Network Functions Virtualisation (NFV); Infrastructure; Network Domain".
[i.4] ETSI GS NFV-IFA 002: "Network Functions Virtualisation (NFV); Acceleration Technologies; VNF Interfaces Specification".
3 Definitions and abbreviations

3.1 Definitions

For the purposes of the present document, the terms and definitions given in ETSI GS NFV 003 [i.1] and the following apply:

para-virtualisation: virtualisation technique in which guest operating system virtual device drivers include software that works directly with specific hypervisor back-end interfaces for device access

NOTE: The virtual device interface is often similar to, but not identical to, the underlying hardware interface. The intent of para-virtualisation is to improve performance compared to the host fully emulating non-virtualised hardware interfaces.

3.2 Abbreviations

For the purposes of the present document, the abbreviations given in ETSI GS NFV 003 [i.1] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in ETSI GS NFV 003 [i.1].
AAL      Acceleration Abstraction Layer
APU      Accelerated Processing Unit
ARP      Address Resolution Protocol
ASIC     Application-Specific Integrated Circuit
CoMP     Coordinated MultiPoint radio
CPU      Central Processing Unit
DOPFR    Dynamic Optimization of Packet Flow Routing
FPGA     Field-Programmable Gate Array
GENEVE   GEneric NEtwork Virtualisation Encapsulation
GPU      Graphics Processing Unit
HWA      Hardware Acceleration
IKE      Internet Key Exchange protocol
NFV      Network Functions Virtualisation
NFVI     NFV Infrastructure
NPU      Network Processor Unit
NV-DIMM  Non-Volatile Dual In-line Memory Module
NVGRE    Network Virtualisation using Generic Routing Encapsulation
NVMe     Non-Volatile Memory express
OSPF     Open Shortest Path First
OVSDB    Open vSwitch Database
RDMA     Remote Direct Memory Access
RIP      Routing Information Protocol
SoC      System on Chip
SRTP     Secure Real-time Transport Protocol
TRILL    Transparent Interconnection of Lots of Links
vCPE     virtual Customer Premises Equipment
VNF      Virtualised Network Function
VPN      Virtual Private Network
VxLAN    Virtual extensible Local Area Network

4 Overview

4.1 General

The NFV Infrastructure (NFVI) includes the totality of all hardware and software components that build up the environment in which virtualised network functions (VNFs) are deployed.
However, some VNFs may require some form of acceleration to be provided by the NFVI to meet their performance goals. While industry standard IT servers can support a large range of NFV use cases, some use cases are more challenging, especially relating to VNFs that need to meet certain latency or SLA requirements.

However, acceleration is not just about increasing performance. NFVI operators may seek different goals as far as acceleration is concerned:

- Reaching the desirable performance metric at a reasonable price.
- Best performance per processor core/cost/watt/square foot, whatever the absolute performance metric is.
- Reaching the maximum theoretical performance level.

NOTE: In this context, "performance" can be expressed in throughput, packets per second, transactions per second, or latency.
To allow multiple accelerators to co-exist within the same NFVI, and to be used by multiple virtualised network function components (VNFCs), several virtualisation technologies exist in the industry and they will continue to evolve. In general, an acceleration abstraction layer (AAL) is used to aid portability of application software (see figure 1). The role of an AAL is to present a common interface for use by a VNFC, independent of the underlying accelerators. Different implementations of the AAL and bindings to different accelerator implementations can be provided without requiring changes to the independent VNFC code. All code which is dependent on the accelerators is within the AAL.

Figure 1: Use of acceleration abstraction layer (AAL) to enable fully portable VNFC code across servers with different accelerators
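EXAMPLE: The following C sketch illustrates the principle of figure 1. All names (aal_ops, vnfc_process, the pure-software binding) are hypothetical and are not defined by the present document; the point is that the VNFC codes only against the common interface, while each binding hides its accelerator behind the same table.

```c
#include <stddef.h>
#include <stdint.h>

/* Common interface seen by the VNFC, independent of any accelerator. */
struct aal_ops {
    int  (*open)(void **ctx);                      /* bind to a device */
    int  (*submit)(void *ctx, const uint8_t *in, size_t in_len,
                   uint8_t *out, size_t out_len);  /* offload one job  */
    void (*close)(void *ctx);                      /* release device   */
};

/* One binding: a pure-software fallback. A hardware binding (FPGA,
 * NPU, NIC offload) would populate the same table with driver calls. */
static int  sw_open(void **ctx) { *ctx = NULL; return 0; }
static int  sw_submit(void *ctx, const uint8_t *in, size_t in_len,
                      uint8_t *out, size_t out_len)
{
    (void)ctx;
    for (size_t i = 0; i < in_len && i < out_len; i++)
        out[i] = in[i] ^ 0xFF;  /* stand-in for the accelerated function */
    return 0;
}
static void sw_close(void *ctx) { (void)ctx; }

static const struct aal_ops sw_binding = { sw_open, sw_submit, sw_close };

/* VNFC-side code: fully portable, no accelerator-specific calls. */
static int vnfc_process(const struct aal_ops *aal,
                        const uint8_t *pkt, size_t len, uint8_t *result)
{
    void *ctx;
    if (aal->open(&ctx) != 0)
        return -1;
    int rc = aal->submit(ctx, pkt, len, result, len);
    aal->close(ctx);
    return rc;
}

int main(void)
{
    uint8_t pkt[4] = { 1, 2, 3, 4 }, result[4];
    return vnfc_process(&sw_binding, pkt, sizeof pkt, result);
}
```

Only the table passed to vnfc_process() changes when a different accelerator is selected; all accelerator-dependent code stays inside the binding, as required above.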
This AAL is a normal feature of operating systems and is normally implemented using a common driver model and hardware-specific drivers. In the NFVI, the virtualisation layer in charge of compute and storage resources is typically implemented in the form of a hypervisor, which plays the role of a base operating system that interfaces to the hardware. The hypervisor then provides common and uniform virtual hardware to all virtual machines so that VNFC code is fully portable. In order to achieve full portability of VNFC code, the AAL can be entirely contained in the hypervisor. In this way, the virtualised accelerator presented to the VNFC is generic, so that the host operating system of the VNFC can use generic drivers without requiring awareness of the AAL. However, the performance of fully independent VNFCs may be less than desired because the hypervisor needs to emulate real hardware, so an alternate model known as para-virtualisation also exists. With para-virtualisation, AAL code is also present in the VNFC and is adapted to specific virtualisation drivers and hardware.
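EXAMPLE: A minimal sketch of the para-virtualised model, assuming a hypothetical shared descriptor ring (real front-end/back-end interfaces such as virtio differ in detail). The guest-side driver posts buffers on a ring shared with the hypervisor back-end and notifies it, rather than the hypervisor emulating a real device register by register.

```c
#include <stdint.h>

#define RING_SIZE 256

/* Descriptor for one buffer handed from guest to hypervisor. */
struct ring_desc {
    uint64_t addr;   /* guest-physical address of the buffer */
    uint32_t len;
};

/* Structure shared between the guest front-end driver and the
 * hypervisor back-end; both sides map the same memory. */
struct shared_ring {
    struct ring_desc desc[RING_SIZE];
    volatile uint32_t prod;   /* advanced by the guest driver */
    volatile uint32_t cons;   /* advanced by the hypervisor   */
};

/* Guest-side front-end: enqueue one buffer and notify the back-end. */
static int guest_xmit(struct shared_ring *ring, volatile uint32_t *kick,
                      uint64_t buf_gpa, uint32_t len)
{
    uint32_t head = ring->prod;

    if (head - ring->cons == RING_SIZE)
        return -1;                              /* ring is full */

    ring->desc[head % RING_SIZE] = (struct ring_desc){ buf_gpa, len };
    ring->prod = head + 1;

    *kick = 1;   /* doorbell/hypercall: wake the hypervisor back-end */
    return 0;
}
```

Avoiding full device emulation on the data path is what gives para-virtualisation its performance advantage, at the cost of a hypervisor-specific driver in the guest.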
It is the intention of the present document to define and promote acceleration architectures that aid portability with the use of an AAL in the guest, the host, or both. The specification of an AAL when other forms of virtualisation are used, and acceleration without use of an AAL, are outside the scope of the present document. The present document does not intend to preclude any specific acceleration architectures from VNF deployments. NFV acceleration can be done by hardware, software or any combination thereof. The AAL should not prescribe a specific hardware or software implementation, but enable a spectrum of different approaches (including pure software).
4.2 Hardware Acceleration

Hardware acceleration is the use of specialized hardware to perform some function faster than is possible by executing the same function on a general-purpose central processing unit (CPU) or on a traditional networking (or other I/O) device (such as a network interface controller (NIC), switch, storage controller, etc.). These functions may be correlated to the three NFVI domains and accordingly address Compute, Network and Storage Acceleration. By using the term "functions", the present document abstracts the actual physical implementation of the hardware accelerator, which covers options such as ASICs, network processors, flow processors, FPGAs and multi-core processors used to offload the main CPU and to accelerate workload performance. With the AAL, multiple hardware accelerators can be presented as one common and uniform virtualised accelerator to the accelerating function and thus can work simultaneously for that function.
4.3 Software Acceleration

In addition to the rich selection of hardware acceleration solutions, modern high-performance CPU (as well as GPU or APU) silicon enables an alternative acceleration option: software acceleration. Software acceleration provides a set of one or more optional software layers that are selectively added within elements of an NFV deployment (e.g. compute, hypervisor, VNF) to augment or bypass native software within a solution. Together, these new layers bring improved capabilities (e.g. increased network throughput, reduced operating overhead) which result in measurable improvements over standard, un-accelerated implementations. Software acceleration frameworks and software accelerators are the two major components built upon these layers to constitute a complete software acceleration solution.

There are several well-known software acceleration frameworks; one is the Data Plane Development Kit (DPDK). DPDK works hand in hand with an underlying Linux operating system to "revector" network traffic outside of the Linux kernel and into user-space processes where the traffic can be handled with reduced system overhead. When deployed appropriately into a virtual switch, this capability enables performance improvements over a native (un-accelerated) virtual switch. Additional improvements can be seen when elements of this open framework are implemented and deployed within a suitable VNF. Together, the combined acceleration results can be greater than either alone.
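EXAMPLE: A minimal DPDK poll-mode sketch, condensed from the pattern used by the standard DPDK sample applications (the port and pool parameters here are illustrative). Packets are pulled from the NIC in bursts by a user-space loop, bypassing the kernel networking stack as described above.

```c
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 1024
#define TX_RING_SIZE 1024
#define NUM_MBUFS 8191
#define MBUF_CACHE_SIZE 250
#define BURST_SIZE 32

int main(int argc, char *argv[])
{
    /* Initialise the Environment Abstraction Layer (hugepages, PMDs). */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
        NUM_MBUFS, MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    uint16_t port = 0;                       /* first DPDK-bound port  */
    struct rte_eth_conf port_conf = { 0 };   /* default device config  */

    /* One RX and one TX queue on the port. */
    if (rte_eth_dev_configure(port, 1, 1, &port_conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
            rte_eth_dev_socket_id(port), NULL, pool) < 0 ||
        rte_eth_tx_queue_setup(port, 0, TX_RING_SIZE,
            rte_eth_dev_socket_id(port), NULL) < 0 ||
        rte_eth_dev_start(port) < 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    /* Poll-mode loop: packets are received in bursts in user space,
     * with no kernel stack or interrupt on the fast path. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;

        /* A real VNF would process the packets here; this sketch
         * simply echoes them back out of the same port. */
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);  /* drop what could not be sent */
    }
    return 0;
}
```

Deployed in a virtual switch, a loop of this kind is what replaces the kernel path; a VNF built the same way adds the second layer of improvement mentioned above.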
Another acceleration framework example is OpenDataPlane (ODP) from the Linaro Networking Group. ODP is an open-source project which provides an application programming environment for data plane applications. ODP offers high performance and portability across networking System on Chip (SoC) solutions of various instruction sets and architectures. The environment consists of common APIs, configuration files, services, and utilities on top of an implementation optimized for the underlying hardware. ODP cleanly separates its API from the underlying hardware architecture, and is designed to support implementations ranging from pure software to those that deeply exploit underlying hardware co-processing and acceleration features present in most modern networking SoCs.
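EXAMPLE: An ODP counterpart, given as a skeleton rather than a complete application (creation of the packet pool, pktio device and scheduled queues is omitted for brevity). The worker loop uses only the portable ODP API; an implementation for a given SoC may back odp_schedule() with a hardware queue scheduler.

```c
#include <odp_api.h>

int main(void)
{
    odp_instance_t instance;

    /* Global and per-thread ODP initialisation. */
    if (odp_init_global(&instance, NULL, NULL) != 0)
        return 1;
    if (odp_init_local(instance, ODP_THREAD_WORKER) != 0)
        return 1;

    /* NOTE: a real application would create a packet pool, open a
     * pktio device and create scheduled queues here (omitted). */

    /* Event loop: the scheduler delivers the next event; on many SoCs
     * this maps onto a hardware scheduler rather than software queues. */
    for (;;) {
        odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);

        if (odp_event_type(ev) == ODP_EVENT_PACKET) {
            odp_packet_t pkt = odp_packet_from_event(ev);
            /* Process/forward the packet here; this sketch drops it. */
            odp_packet_free(pkt);
        } else {
            odp_event_free(ev);
        }
    }
    return 0;
}
```

The same loop runs unchanged on a pure-software ODP implementation and on one that exploits SoC co-processing, which is the portability property described above.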
Software accelerators are components which are typically (though not necessarily exclusively) built against corresponding software acceleration frameworks such as DPDK and ODP. Examples of such accelerators are Poll Mode Drivers that would utilize the DPDK fast path, or similar fast-path mechanisms built with ODP APIs. When dealing with the concept of the acceleration abstraction layer (AAL) with regard to software acceleration, it should be noted that the AAL provides a common abstraction to a set of variant software accelerators, not a set of different software acceleration frameworks.
4.4 Heterogeneous Acceleration

4.4.1 General

Heterogeneous accelerators are another class of accelerated functions called from the VNFC (and differentiated from the hardware accelerators described in clause 7.2.3 of ETSI GS NFV-INF 003 [i.2]). The term refers to functions implemented within the compute node on the NIC, CPU complex, accelerator blades/chassis, a plug-in card, or an attached device such as an FPGA, ASIC or NPU, and called from the VNFC, possibly at a fine granularity. Heterogeneous acceleration techniques may be independent of, or may rely on, the CPU complex and NIC hardware features.
Software may make use of techniques such as huge page memory, ring buffers and poll-mode drivers. The implementation of heterogeneous accelerators may vary from vendor to vendor.

4.4.2 Coherent acceleration

4.4.2.1 Nature

Coherent hardware acceleration denotes a special execution context of acceleration where the accelerator and the CPU are closely coupled so that general memory (physically addressable or VNF virtual private address space) can be addressed directly from the accelerator. Coherent accelerator access can be done through new instructions available in the processor or through a special controlling interface in the processor. The execution of an accelerated function in the hardware may be synchronous or asynchronous to the CPU program. When asynchronous, the CPU or the controlling interface provides mechanisms for either notification (via interrupts or other mechanisms) or polling for the completion of the instruction execution. The acceleration hardware may be on the same chip as the processor or elsewhere, connected through standard interfaces or private interfaces.
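EXAMPLE: A sketch of the asynchronous, polled variant described above, with an entirely hypothetical device layout (the job descriptor and doorbell are invented for illustration). Because the accelerator is coherent with the CPU, the descriptor points directly at application memory and completion is observed by polling a status word that the hardware updates.

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical job descriptor shared coherently with the accelerator. */
struct job_desc {
    uint64_t src;               /* virtual address of input buffer   */
    uint64_t dst;               /* virtual address of output buffer  */
    uint32_t len;
    _Atomic uint32_t status;    /* 0 = pending, 1 = done (set by HW) */
};

/* Hypothetical doorbell: writing the descriptor address starts the job;
 * the pointer would come from mapping the device's control interface. */
static void accel_doorbell_ring(volatile uint64_t *doorbell,
                                struct job_desc *job)
{
    *doorbell = (uint64_t)(uintptr_t)job;
}

static void run_job(volatile uint64_t *doorbell, struct job_desc *job)
{
    atomic_store_explicit(&job->status, 0, memory_order_release);
    accel_doorbell_ring(doorbell, job);

    /* Asynchronous execution: the CPU polls for completion instead of
     * taking an interrupt. */
    while (atomic_load_explicit(&job->status, memory_order_acquire) == 0)
        ;  /* spin; real code would bound this or yield the core */
}
```

A notification-based design would replace the polling loop with an interrupt or event wait; which is preferable depends on the job latency relative to the CPU cost of spinning.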
4.4.2.2 Runtime definable acceleration

Some acceleration hardware