1 Openstack Summit November 2017
Warp-speed Open vSwitch: Turbo Charge VNFs to 100Gbps in the Next-Gen SDN/NFV Datacenter
Anita Tragler, Product Manager, Networking/NFV
Mark Iskra, Technical Marketing
Ash Bhalgat, Sr. Director, Cloud Marketing

2 Agenda
Requirements for Next-Gen SDN/NFV Data Centers
Boosting NFV Performance: Datapath Options - OVS, SR-IOV, OVS-DPDK
OVS Offload Options - Full and Partial Offloads
Why Full OVS Offload? NIC Architecture and Packet Flow
Open Source Community Contributions
Benchmark Testing Setup
Performance Results
References

3 Next-Gen Data Center Needs: NFV, 5G and IoT
Scale-up services need high bandwidth: millions of mobile flows (voice, video, data), 100 billion 1-10Gbps virtual connections
High performance: 32-64 cores/socket, PCIe Gen4, DDR4, all-flash arrays (NVMe), 25G to 100G per server NIC port
Low latency (1ms RTT)
Optimized resource usage (save CapEx)
Multi-site: efficient multi-tenancy w/ SDN overlay
Integrated end-to-end solution w/ no vendor lock-in
(Chart: Which Openstack network driver is popular? Openstack Survey, April 2017)

4 Datapath Options Today - 10G Server
Three datapaths, shown across kernel space and user space with VF0/VF1 on the NIC:
VNF with Open vSwitch (kernel datapath) - the default for Openstack; switching, bonding, overlay, live migration
DPDK VNF with Open vSwitch + DPDK - user-space datapath, direct IO to the NIC or vNIC; switching, bonding, overlay (see the configuration sketch below)
DPDK VNF with SR-IOV (Single-Root IO Virtualization) - hardware dependent; NIC line rate with no CPU overhead; relies on the ToR switch for switching
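As a rough illustration of the OVS-DPDK option, a minimal configuration sketch follows, assuming a bridge named br0, a NIC at PCI address 0000:03:00.0, and example socket-memory/PMD-core values; none of these names or numbers come from the slides.

  # Enable the DPDK datapath in Open vSwitch (example resource settings)
  ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
  ovs-vsctl set Open_vSwitch . other_config:dpdk-socket-mem="1024,1024"
  ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x3c
  # User-space bridge with a DPDK-bound physical port and a vhost-user port for the VNF
  ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
  ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk options:dpdk-devargs=0000:03:00.0
  ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser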

5 OVS Offload – SR-IOV w/ OVS Control Plane
Offload support includes OVS rule match/classification and actions: QoS marking, overlay tunneling (VXLAN, GRE, QinQ)
Most use cases need security groups: basic - firewall (stateless), filtering; advanced - connection tracking, NAT [Work In Progress Upstream]
Flows not handled by the NIC fall back to OVS on the host (slow path)
(Diagram: the Openstack controller (Neutron/Nova) and SDN controller program ovsdb-server/ovs-vswitchd on the KVM hypervisor; the kernel OVS bridge offloads rules to the NIC eswitch via TC/flower, while VNF VMs attach directly to SR-IOV VFs on the PF - see the enablement sketch below)
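A minimal sketch of how this mode is typically enabled on a ConnectX-class NIC, assuming a PF named enp3s0f0 at PCI address 0000:03:00.0, two VFs, and a VF representor named enp3s0f0_0; the interface names, VF count and service name are placeholders, not taken from the slides.

  # Create VFs, then switch the NIC eswitch from legacy SR-IOV to switchdev mode
  echo 2 > /sys/class/net/enp3s0f0/device/sriov_numvfs
  devlink dev eswitch set pci/0000:03:00.0 mode switchdev
  # Enable TC hardware offload on the PF and flow offload in Open vSwitch
  ethtool -K enp3s0f0 hw-tc-offload on
  ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
  systemctl restart openvswitch
  # VF representor ports are then added to the OVS bridge like any other port
  ovs-vsctl add-port br-int enp3s0f0_0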

6 OVS Offload - Virtio options
Option 1 - OVS-DPDK partial offload: QoS, security groups, conntrack and overlay stay in the host OVS-DPDK bridge; match/classification is partially offloaded to the NIC eswitch (VF1/VF2), and the VNF VMs keep standard virtio vNICs (see the flow-dump sketch below)
Option 2 - OVS embedded in the NIC: no host OVS; ovsdb-server/ovs-vswitchd run on the NIC itself and are driven by the SDN controller (ODL, OVN); requires tighter integration testing
(Diagram: in both options the Openstack controller (Neutron/Nova) and the SDN controller program the OVS control plane, and the NIC eswitch carries the offloaded datapath)
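With partial offload, only some flows land in hardware; newer Open vSwitch releases can show which ones did. A hedged sketch (the type= selector may not exist in older OVS versions):

  # Datapath flows handled by the NIC eswitch (hardware-offloaded)
  ovs-appctl dpctl/dump-flows type=offloaded
  # Datapath flows still handled in software by OVS on the host
  ovs-appctl dpctl/dump-flows type=ovs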

7 ASAP2 Direct: Full OVS Offload
Full OVS offload: best of both worlds
Server NICs: 100G is the new 40G, 25G is the new 10G
Accelerate the OVS data path with the standard OVS control plane (Mellanox ASAP2) - in other words, enable support for most SDN controllers with an SR-IOV data plane
OVS offload beats OVS-DPDK: up to 10x PPS performance with ZERO CPU consumption (Mellanox lab results)
(Diagram: VNF VMs attach to SR-IOV VFs; the NIC PF/eswitch carries the offloaded OVS datapath - see the Openstack port sketch below)
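In an Openstack deployment this typically surfaces as a direct (SR-IOV) Neutron port whose binding profile advertises the switchdev capability, so Nova attaches the VM to a VF while OVS keeps programming the representor. A sketch, with network, flavor, image and port names as placeholders:

  # Direct (SR-IOV) port that advertises switchdev so OVS hardware offload is used
  openstack port create --network private --vnic-type direct \
      --binding-profile '{"capabilities": ["switchdev"]}' offload-port-1
  # Boot the VNF with that port; steady-state traffic then bypasses the host OVS datapath
  openstack server create --flavor m1.large --image vnf-image \
      --nic port-id=offload-port-1 vnf-vm-1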

8 Open Source Community Upstream Contributions
Linux kernel community: representor ports; TC (traffic control) and flower offload hooks to the NIC; conntrack offload
Openstack community: OVS ML2 driver to bind the new VIF/port type (SR-IOV + OVS); new Nova VIF support; disable OVS in the host; TripleO installer support
OVS userspace: flow offload via TC or DPDK; policy mechanism; conntrack offload; OVN flow offload
DPDK community: flow offload from DPDK (RTE_Flow, see the sketch below); DPDK conntrack offload
(The slide's legend marks each item as either currently available or future work)
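To make the RTE_Flow item concrete, DPDK's testpmd exposes the rte_flow API from its console; a hedged sketch, with the port number, source address and queue index chosen purely for illustration:

  testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 / udp / end actions queue index 1 / end
  testpmd> flow list 0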

9 Benchmark Testing Configuration
VXLAN tunnels between back-to-back hypervisors
Server specs: Intel Xeon E-series CPU, Mellanox ConnectX-5 NIC (100Gbps), RHEL 7.4
Network configuration:
Switch: Mellanox SN2100 (100Gbps)
NIC: Mellanox ConnectX-5 100G
MTU: 9k (underlay), 1500 (overlay)
SDN: Nuage Virtualized Cloud Services v5.1u1

10 Line Rate Throughput w/ Zero CPU Utilization !!
OVS ASAP2 achieves ~line rate (94Gbps) for large packets over VXLAN tunnels
OVS virtio can't scale beyond 30Gbps
OVS ASAP2 CPU utilization is ~50% lower than OVS virtio (can't compare beyond 30Gbps, where virtio tops out)
(Chart: OVS ASAP2 CPU utilization is lower than OVS at each data rate)
Testing methodology: iperfv2 load generation (see the sketch below); 12 CPU cores dedicated to testing; measure the difference in CPU utilization; CPU utilization numbers include iPerf
Ping latency with 20Gbps background load: OVS kernel-virtio (ms) vs. ASAP2 (ms)
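A hedged sketch of the kind of iperf2 run described in the methodology; the server address, stream count and duration are illustrative only, not from the slides:

  # Server side, on the remote hypervisor/VM
  iperf -s
  # Client side: parallel TCP streams across the VXLAN overlay
  iperf -c 10.0.0.2 -P 12 -t 60 -i 5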

11 Highest PPS Performance w/ Zero CPU Utilization !!
OVS ASAP2 achieves ~60 MPPS for small packets over VXLAN tunnels
CPU utilization: the entire CPU consumption (only 18%) comes from the test bed itself; ZERO CPU utilization for OVS ASAP2 packet processing
Flat CPU consumption shows: everything from the test bench, 0% from OVS ASAP2 packet processing
Testing methodology: TRex load generation; 6 CPU cores dedicated to TestPMD (see the sketch below)
Ping latency with 20Gbps background load: virtio (ms) vs. ASAP2 Direct OVS offload (ms)
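A hedged sketch of a TestPMD forwarder inside the VNF VM, sized for the 6 forwarding cores mentioned above; the core list, memory channels and forwarding mode are assumptions, and the binary is named testpmd or dpdk-testpmd depending on the DPDK version:

  # One main core plus 6 forwarding cores, interactive console, MAC-swap forwarding
  testpmd -l 0-6 -n 4 -- -i --nb-cores=6 --forward-mode=macswap
  testpmd> start
  testpmd> show port stats all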

12 References
Nuage: www.nuagenetworks.net
- Nuage Developer Experience
- Nuage Networks ML2 Community
- Nuage for OSPD
Mellanox:
- Using SR-IOV offloads with Open vSwitch and similar applications
- OVS patch series for all changes, including the new offloading API
- DPDK RTE_Flow API
Red Hat: OpenStack blueprints, specs and reviews
- OVS Offload SmartNIC enablement Openstack blueprint + spec [Pike]: neutron, nova, os-vif

13 Thank-you

