Accelerating the Path to the Guest
Maryam Tahhan and Kevin Traynor, Intel
Agenda
- NFV
- Guest access methods
- Summary
- Q&A
Network Function Virtualization (NFV)
- By 2017, mobile traffic will have grown 13x in the space of 5 years.*
- In 2017 there will be 3x more connected devices than people on Earth.*
- Service providers are virtualizing the functionality of network components, moving away from custom ASICs and onto standard servers.
- The network functions running in a guest require near-native performance.
* http://www.intel.com/content/www/us/en/communications/internet-minute-infographic.html
Legacy virtio-net
virtio-net is a para-virtualized network driver based on virtio. A guest running the virtio_net driver shares a number of virtqueues with QEMU. The mechanism by which traffic is passed has two parts:
- The datapath: the shared virtqueues themselves.
- The notification path: signals between guest and host that new buffers are available.
[Diagram: the guest's virtio driver exchanges TX/RX buffers with QEMU, which forwards packets through a tap device to the kernel OVS datapath and out a physical port (Eth X); KVM carries the notifications in both directions.]
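To make the two paths concrete, here is a minimal sketch of a guest-side transmit using the Linux kernel virtqueue API. This is not from the slides: a real driver (drivers/net/virtio_net.c) also handles the virtio-net header, scatter-gather fragments and flow control.

```c
/* Guest-side virtio-net transmit sketch (Linux kernel virtio API).
 * Illustrative only; error handling and headers are simplified. */
#include <linux/errno.h>
#include <linux/scatterlist.h>
#include <linux/virtio.h>

static int xmit_skb_sketch(struct virtqueue *tx_vq, void *buf, unsigned int len)
{
	struct scatterlist sg;

	sg_init_one(&sg, buf, len);

	/* Datapath: place the buffer on the shared TX virtqueue. */
	if (virtqueue_add_outbuf(tx_vq, &sg, 1, buf, GFP_ATOMIC) < 0)
		return -ENOSPC;

	/* Notification path: the "kick" traps to KVM/QEMU (ioeventfd),
	 * telling the backend that new buffers are available. */
	virtqueue_kick(tx_vq);
	return 0;
}
```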
Intel® Data Plane Development Kit and ivshmem
Intel® DPDK:
- Hugepages: physically contiguous memory, 1GB pages (e.g. /dev/hugepages/rte_map0).
- Rings: lockless, efficient for IPC, used in Rx/Tx pairs.
ivshmem (aka Nahanni*):
- QEMU* 1.4.0.
- Host-initiated sharing: the hugepage location and ivshmem device are given on the QEMU command line (requires a QEMU patch).
[Diagram: the host OVS datapath (DPDK PMD and Ring API) shares a 1GB hugepage region with the guest; the region appears in the guest as BAR2 of an ivshmem PCI device (e.g. 04:00.0), where a DPDK client application reaches the RX/TX rings and mempool through the DPDK Ring API.]
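As an illustration of the guest side of this path, a minimal sketch assuming the DPDK 1.x ring API of the time. The ring names ("r0_tx", "r0_rx") are hypothetical; in practice the guest looks up whatever rings the host switch created in the shared hugepage memory.

```c
/* Guest DPDK application on the dpdkr/ivshmem path (sketch). */
#include <rte_eal.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

int main(int argc, char **argv)
{
	struct rte_ring *rx, *tx;
	struct rte_mbuf *pkts[32];
	unsigned nb;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Find the shared rings created by the host switch. */
	rx = rte_ring_lookup("r0_tx");	/* host TX is guest RX */
	tx = rte_ring_lookup("r0_rx");
	if (rx == NULL || tx == NULL)
		return -1;

	for (;;) {
		/* Only mbuf pointers cross the rings; the packet data
		 * itself never leaves the shared hugepages. */
		nb = rte_ring_dequeue_burst(rx, (void **)pkts, 32);
		if (nb > 0)
			rte_ring_enqueue_burst(tx, (void **)pkts, nb);
	}
}
```

Because only pointers move, this is the zero-copy behaviour summarised on the next slide.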
Intel® DPDK rings and ivshmem
Characteristics today:
- Performance: zero copy; fast.
- Security: guests can access host memory; unsuitable for untrusted guests.
- Live migration: sharing is host-initiated and fixed at guest start-up.
- Compatibility: DPDK guest applications only.
- Maintenance: requires an upstream QEMU patch.
Future work:
- Security: share only regions of memory; security groups.
- Live migration: modifications needed; remains difficult.
vhost-net and us-vhost
[Diagram: two vhost variants serving a guest virtio-net driver.
- vhost-net: QEMU configures the in-kernel vhost-net module via ioctls; KVM signals it through ioeventfd/irqfd, and packets flow through a tap device into the kernel OVS datapath and out a physical port (Eth X).
- us-vhost: the vhost backend runs in user space as a DPDK library inside the OVS datapath; a CUSE character device stands in for /dev/vhost-net, with eventfd links carrying the ioeventfd/irqfd notifications between KVM and the userspace backend.]
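For flavour, a minimal sketch of how a userspace switch might register the us-vhost backend, assuming the librte_vhost API of the DPDK 1.7/1.8 era (function names changed in later releases; the device path is hypothetical, and the real port plumbing is omitted).

```c
/* us-vhost backend registration (sketch, DPDK 1.7/1.8-era API). */
#include <rte_virtio_net.h>

static int new_device(struct virtio_net *dev)
{
	/* The guest's virtqueues are now mapped into this process's
	 * (vswitchd's) address space; add the port to the datapath. */
	dev->flags |= VIRTIO_DEV_RUNNING;
	return 0;
}

static void destroy_device(volatile struct virtio_net *dev)
{
	dev->flags &= ~VIRTIO_DEV_RUNNING;
}

static const struct virtio_net_device_ops ops = {
	.new_device     = new_device,
	.destroy_device = destroy_device,
};

int start_vhost(void)
{
	/* Registers the CUSE character device that QEMU opens in
	 * place of /dev/vhost-net. */
	rte_vhost_driver_register("/dev/usvhost-1");
	rte_vhost_driver_callback_register(&ops);

	/* Blocks, servicing vhost session messages from QEMU. */
	rte_vhost_driver_session_start();
	return 0;
}
```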
us-vhost
Characteristics today:
- Performance: fewer copies and context switches.
- Security: virtqueues are mapped into the vswitchd address space only.
- Live migration: a solution exists.
- Compatibility: DPDK guest applications and virtio-net.
- Library: us-vhost library provided by DPDK.
Future work:
- Performance: zero copy; mergeable buffers.
- Features: virtio-net backend enhancements.
- Library: vhost-user in QEMU.
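And a sketch of the corresponding datapath, again assuming the era's librte_vhost burst API; dev is the virtio_net handle passed to the new_device() callback in the previous sketch, and the loopback forwarding stands in for a real switch.

```c
/* Moving packets between a guest's virtqueues and the host (sketch). */
#include <rte_mbuf.h>
#include <rte_virtio_net.h>

#define MAX_BURST 32

void forward_guest_traffic(struct virtio_net *dev, struct rte_mempool *mp)
{
	struct rte_mbuf *pkts[MAX_BURST];
	uint16_t nb, i;

	/* Drain the guest's TX virtqueue into host mbufs... */
	nb = rte_vhost_dequeue_burst(dev, VIRTIO_TXQ, mp, pkts, MAX_BURST);
	if (nb == 0)
		return;

	/* ...and hand them straight back on the RX virtqueue. The
	 * enqueue copies into the guest's buffers, so the host-side
	 * mbufs must still be freed afterwards. */
	rte_vhost_enqueue_burst(dev, VIRTIO_RXQ, pkts, nb);
	for (i = 0; i < nb; i++)
		rte_pktmbuf_free(pkts[i]);
}
```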
Use case comparison
- Use case 1: highest performance; trusted guests; DPDK VNF; no live migration → dpdkr & ivshmem.
- Use case 2: accelerated performance; untrusted guests; DPDK and virtio-net VNFs; live migration → us-vhost.
Summary
- NFV requires high-bandwidth, low-latency interfaces into the Network Function Virtualization Infrastructure.
- Two accelerated paths to the guest were recently enabled in netdev-dpdk.
- There is a trade-off between performance, security, live migration and compatibility.
- DPDK has an active community supporting it.
Q & A