Fd.io DPDK vSwitch mini-summit

Fd.io DPDK vSwitch mini-summit: a discussion of vSwitches, vSwitch requirements, and how this fits into the cloud. Christopher Price (Ericsson), Mark D. Gray (Intel), Thomas F. Herbert (Red Hat)

Chris Price -- Introduction; Mark D. Gray -- vSwitch Requirements; Thomas F. Herbert -- OVS and fd.io

Fd.io: why another vSwitch?

Other vSwitches: competition is good, so it's good we have an alternative to OVS, right? http://www.openswitch.net/ http://www.opendataplane.org/ http://www.projectfloodlight.org/indigo/ https://opennetlinux.org https://azure.github.io/SONiC/ …

So… let's start another one! But why? The best-in-breed vSwitches already have fast-path processing, advanced classification, and multithreading; but they are monolithic and purpose-built, have sequential packet pipelines, and are feature-constrained.

Comes integrated in OPNFV: FDS (FastDataStacks) delivers an integrated OpenStack deployment using fd.io, provides an "in context" apples-to-apples comparison with other solutions, and establishes an NFVI platform for advanced VPP application development and deployment. Available in the Colorado release this autumn. https://wiki.opnfv.org/display/fds

Other requirements: multiqueue vhost, vhost reconnect, performance, stateful ACL, tunnelling, SFC/NSH, service assurance, others? OpenStack, the cloud, and NFV impose a number of requirements on virtual switches used in the NFVI.

Service Function Chaining (diagram: Service Path 1 -- Client A through SF A and SF B to Server A; Client B through SF C and SF D to Server B). Service Function Chaining provides the ability to define an ordered list of network services (e.g. firewalls, NAT, QoS). These services are then "stitched" together in the network to create a service chain.

Service Function Chaining (diagrams: Service Path 2 and Service Path 3 over the same topology of clients, service functions, and servers).

Network Service Header (NSH) header format: reduces lookup cost by avoiding costly classifications; requires a transport protocol (VxLAN-GPE, Ethernet); may require tunnel termination inside the virtual machine; can provide a mechanism for sharing metadata between service functions using "Type 2 metadata". (Diagram: the SFF/virtual switch forwards packets encapsulated as Inner Frame | NSH | VxLAN-GPE between SF A and SF B.) NSH is a header that can be inserted into packets to realise service function paths.
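
To make the header format concrete, here is a minimal sketch of NSH field extraction following the layout in the NSH specification (now RFC 8300); it is illustrative only and is not the parser used by VPP or OVS:

```c
/* Minimal sketch of NSH parsing (layout per RFC 8300); illustrative
 * only, not the VPP or OVS implementation. */
#include <stdint.h>
#include <arpa/inet.h>

struct nsh_header {
    uint32_t base;          /* Ver | O | U | TTL | Length | MD Type | Next Protocol */
    uint32_t service_path;  /* Service Path Identifier (24 bits) | Service Index (8 bits) */
    uint32_t context[];     /* optional Type 1 / Type 2 metadata follows */
};

/* Service Path Identifier: which service chain this packet belongs to. */
static inline uint32_t nsh_spi(const struct nsh_header *nsh)
{
    return ntohl(nsh->service_path) >> 8;
}

/* Service Index: remaining hops on the path; decremented by each SF/SFF. */
static inline uint8_t nsh_si(const struct nsh_header *nsh)
{
    return ntohl(nsh->service_path) & 0xff;
}

/* MD Type 2 means variable-length metadata TLVs follow the fixed headers. */
static inline uint8_t nsh_md_type(const struct nsh_header *nsh)
{
    return (ntohl(nsh->base) >> 8) & 0x0f;
}
```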

Stateful ACL: security groups and security group rules give administrators and tenants the ability to specify the type and direction (ingress/egress) of traffic that is allowed to pass through a port. Security groups require stateful ACLs, which also allow the related return flows of allowed flows. (Diagram: a VM instance's port has a security group rule with direction = ingress, protocol = tcp, remote IP = 1.2.3.4, remote port = 80. This allows the flow protocol = tcp, dest IP = 1.2.3.4, dest port = 80, source IP = 5.5.5.5, source port = 12345, and the related flow protocol = tcp, dest IP = 5.5.5.5, dest port = 12345, source IP = 1.2.3.4, source port = 80.) Stateful classification is required in order to implement security groups.
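
As a rough illustration of the stateful part, the sketch below admits a packet if its 5-tuple, or the reverse of an already-allowed 5-tuple, is known; the flow table helpers and rule check are hypothetical stand-ins, not the VPP or OVS connection tracker:

```c
/* Sketch: admit a packet if its 5-tuple, or the reverse of an
 * already-allowed 5-tuple, is in the flow table. The helpers
 * (flow_table_contains, flow_table_add, rule_matches) are assumed
 * stand-ins for a real connection tracker and rule engine. */
#include <stdbool.h>
#include <stdint.h>

struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

static struct flow_key reverse_key(struct flow_key k)
{
    struct flow_key r = {
        .src_ip = k.dst_ip, .dst_ip = k.src_ip,
        .src_port = k.dst_port, .dst_port = k.src_port,
        .proto = k.proto,
    };
    return r;
}

bool flow_table_contains(const struct flow_key *k); /* assumed lookup  */
void flow_table_add(const struct flow_key *k);      /* assumed insert  */
bool rule_matches(const struct flow_key *k);        /* security group rule check */

bool stateful_acl_allow(struct flow_key k)
{
    struct flow_key rev = reverse_key(k);

    if (flow_table_contains(&k) || flow_table_contains(&rev))
        return true;                 /* established or related flow */

    if (rule_matches(&k)) {          /* new flow admitted by a rule */
        flow_table_add(&k);
        return true;
    }
    return false;
}
```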

Multiqueue vhost (diagram: a guest vNIC with queues Q0-Q3, each with Rx and Tx, served by vSwitch PMD threads 0-3 on the host and backed by a hardware NIC with queues Q0-Q3). Improves throughput into virtual machines by providing a way to scale out performance when using vhost ports. Multiqueue vhost is required in order to get better performance into a virtual machine.
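
As a sketch of what this looks like on the switch side, the following configures a DPDK ethdev (which could be a vhost-user port exposed as an ethdev) with four RX/TX queue pairs, one per PMD thread; the queue and descriptor counts are illustrative, not tuned values:

```c
/* Sketch: configuring a DPDK ethdev (e.g. a vhost-user port) with
 * four RX/TX queue pairs, one per PMD thread. Queue and descriptor
 * counts are illustrative. */
#include <rte_ethdev.h>

#define NB_QUEUES 4
#define NB_DESC   1024

int setup_multiqueue_port(uint16_t port_id, struct rte_mempool *pool)
{
    struct rte_eth_conf conf = { 0 };
    int ret, q;

    /* One RX and one TX queue per PMD thread. */
    ret = rte_eth_dev_configure(port_id, NB_QUEUES, NB_QUEUES, &conf);
    if (ret < 0)
        return ret;

    for (q = 0; q < NB_QUEUES; q++) {
        ret = rte_eth_rx_queue_setup(port_id, q, NB_DESC,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL, pool);
        if (ret < 0)
            return ret;

        ret = rte_eth_tx_queue_setup(port_id, q, NB_DESC,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL);
        if (ret < 0)
            return ret;
    }
    return rte_eth_dev_start(port_id);
}
```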

Vhost reconnect (diagram: a QEMU guest vNIC connected to the vSwitch through a vhost-user Unix domain socket, with reconnect). Ensures virtual machine network connectivity is maintained even across restarts of the virtual switch. Connectivity between the VM and the vSwitch must be maintained across reboots of either the virtual machine or the vSwitch.
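
One common way to achieve this is to run the switch side of vhost-user in client mode, so the switch reconnects to QEMU's socket after a restart; a minimal sketch using the DPDK vhost library (API names as in recent DPDK releases, so treat this as an assumption about the exact era of the API) is shown below:

```c
/* Sketch: registering a vhost-user port in client mode so the
 * vSwitch reconnects to QEMU's socket after a restart (DPDK vhost
 * library). */
#include <rte_vhost.h>

int add_vhost_client_port(const char *sock_path)
{
    /* Client mode: QEMU owns the socket, the switch connects to it,
     * and the connection is re-established if either side restarts. */
    if (rte_vhost_driver_register(sock_path, RTE_VHOST_USER_CLIENT) != 0)
        return -1;

    /* Start the vhost-user session for this socket. */
    return rte_vhost_driver_start(sock_path);
}
```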

Network Service Assurance (diagram: an application in a QEMU guest with vNICs; the vSwitch classifies traffic into a high-priority queue, e.g. voice, and a low-priority queue, e.g. bulk data transfer). Provides differentiated classes of service for high-priority dataflows, ensuring end-to-end Quality of Service SLAs are met (one example shown here); performance monitoring at the network level; reactive and proactive responses to faults.
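
A toy sketch of the classification step, splitting traffic by DSCP into a high- and a low-priority queue; the enqueue helpers and the single-codepoint test are hypothetical simplifications of a real classifier stage:

```c
/* Sketch: classify packets into a high- or low-priority queue by
 * DSCP code point. The enqueue helpers and the packet struct are
 * hypothetical; a real vSwitch would use its classifier stage. */
#include <stdint.h>

#define DSCP_EF 46   /* Expedited Forwarding, typically voice */

struct pkt { const uint8_t *ip_hdr; /* points at the IPv4 header */ };

void enqueue_high(struct pkt *p);  /* assumed priority queue    */
void enqueue_low(struct pkt *p);   /* assumed best-effort queue */

void classify(struct pkt *p)
{
    /* DSCP is the upper 6 bits of the IPv4 TOS byte (offset 1). */
    uint8_t dscp = p->ip_hdr[1] >> 2;

    if (dscp == DSCP_EF)
        enqueue_high(p);   /* e.g. voice */
    else
        enqueue_low(p);    /* e.g. bulk data transfer */
}
```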

Open vSwitch dataplane vs. FD.io/VPP. Open vSwitch: OpenFlow protocol matches and actions; the fast path is a Linux kernel module or an accelerated fast path; the exact match cache is the fastest tier (about 8K entries); misses go to the slower miniflow match; misses there go to the OpenFlow match, the slowest tier. VPP (fd.io): a graph-based vector processing engine; extensible via plugins; includes a complete IPv4 and IPv6 stack; uses DPDK as the packet engine; being extended for a VXLAN VTEP.
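
The tiered lookup on the OVS side can be pictured with the following sketch; the function names are hypothetical placeholders for the exact-match cache, the miniflow classifier, and the full OpenFlow pipeline, not the actual OVS code:

```c
/* Sketch of an OVS-style tiered lookup: exact-match cache first,
 * then the slower wildcard (miniflow) classifier, then the slowest
 * full OpenFlow pipeline. All names are hypothetical. */
struct flow;    /* parsed packet headers */
struct action;  /* what to do with the packet */

const struct action *emc_lookup(const struct flow *f);      /* fastest, small cache   */
const struct action *miniflow_lookup(const struct flow *f); /* wildcard match, slower */
const struct action *openflow_lookup(const struct flow *f); /* full pipeline, slowest */
void emc_insert(const struct flow *f, const struct action *a);

const struct action *lookup(const struct flow *f)
{
    const struct action *a = emc_lookup(f);
    if (a)
        return a;                 /* hit in the exact-match cache */

    a = miniflow_lookup(f);
    if (!a)
        a = openflow_lookup(f);   /* miss again: consult OpenFlow tables */

    if (a)
        emc_insert(f, a);         /* cache for subsequent packets of the flow */
    return a;
}
```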

Open vSwitch vs. FD.io/VPP control plane differences. Open vSwitch: managed via OpenFlow and OVSDB; based on OpenFlow, a system of tables with matches and actions; supports common packet header fields; actions change fields, with some pushes and pops; generally driven by ODL/NetVirt/OVSDB and OpenStack Neutron. VPP: managed via NETCONF/REST through the ODL Honeycomb agent; the VPP plugin API underneath; RESTCONF from ODL and NETCONF to the Honeycomb agent; OpenStack Neutron ML2.

OVS and FD.io similarities: both are DPDK-based; both are virtualized switch fabrics implemented in software on commodity processors; both are deployable in the compute node; host-to-VM connectivity uses vhost-user; container support is through the kernel via TAP.

Open vSwitch architecture (diagrams reproduced from the DPDK Userspace Summit 2015, Dublin).

How VPP works: a packet vector from DPDK is the input to the graph; nodes on the graph pipeline process packets according to registered dependencies; additional plugins can be introduced to create new nodes (examples of plugins: NSH and VTEP); parallelism gives determinism in throughput and latency.
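
For a feel of what a graph node looks like, here is a simplified sketch modeled on VPP's public sample plugin; the node name and next-node choice are hypothetical, and a real node would dual/quad-loop over the vector for performance:

```c
/* Simplified VPP graph node: drains the incoming packet vector and
 * hands every buffer to the "error-drop" next node. Modeled on the
 * VPP sample plugin; "demo-node" is a hypothetical name. */
#include <vlib/vlib.h>
#include <vnet/vnet.h>

typedef enum { DEMO_NEXT_DROP, DEMO_N_NEXT } demo_next_t;

static uword
demo_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node, vlib_frame_t *frame)
{
  u32 *from = vlib_frame_vector_args (frame);
  u32 n_left_from = frame->n_vectors;
  u32 next_index = node->cached_next_index;

  while (n_left_from > 0)
    {
      u32 *to_next, n_left_to_next;
      vlib_get_next_frame (vm, node, next_index, to_next, n_left_to_next);

      while (n_left_from > 0 && n_left_to_next > 0)
        {
          u32 bi0 = from[0];
          u32 next0 = DEMO_NEXT_DROP;
          vlib_buffer_t *b0 = vlib_get_buffer (vm, bi0);
          (void) b0;               /* inspect or rewrite the packet here */

          from += 1; n_left_from -= 1;
          to_next[0] = bi0; to_next += 1; n_left_to_next -= 1;

          vlib_validate_buffer_enqueue_x1 (vm, node, next_index,
                                           to_next, n_left_to_next,
                                           bi0, next0);
        }
      vlib_put_next_frame (vm, node, next_index, n_left_to_next);
    }
  return frame->n_vectors;
}

VLIB_REGISTER_NODE (demo_node) = {
  .function = demo_node_fn,
  .name = "demo-node",
  .vector_size = sizeof (u32),
  .type = VLIB_NODE_TYPE_INTERNAL,
  .n_next_nodes = DEMO_N_NEXT,
  .next_nodes = { [DEMO_NEXT_DROP] = "error-drop" },
};
```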

Backup

What does fd.io offer?
General: 14+ Mpps on a single core; multimillion-entry FIBs; source RPF; thousands of VRFs; controlled cross-VRF lookups; multipath (ECMP and unequal cost); multimillion-entry classifiers (arbitrary N-tuple); VLAN support (single/double tag); counters for everything; mandatory input checks (TTL expiration, header checksum, L2 length < IP length, ARP resolution/snooping, ARP proxy).
IPv4: GRE, MPLS-GRE, NSH-GRE, VXLAN; IPsec; DHCP client/proxy; CG NAT.
IPv6: neighbor discovery; router advertisement; DHCPv6 proxy; L2TPv3; segment routing; MAP/LW46 (IPv4aaS); iOAM.
MPLS: MPLS over Ethernet, with deep label stacks supported.
L2: VLAN support, single/double tag; L2 forwarding with EFP/bridge-domain concepts; VTR push/pop/translate (1:1, 1:2, 2:1, 2:2); MAC learning (default limit of 50k addresses); bridging with split-horizon group support/EFP filtering; proxy ARP; ARP termination; IRB (BVI support with router MAC assignment); flooding; input ACLs; interface cross-connect.

What does fd.io offer? (continued) Modularity: plugins == subprojects. Plugins can introduce new graph nodes, rearrange the packet processing graph, be built independently of the VPP source tree, and be added at runtime (drop into the plugin directory). All in user space. Enabling: the ability to take advantage of diverse hardware when present; support for multiple processor architectures (x86, ARM, PPC); few dependencies on the OS (clib), allowing easier ports to other OSes/environments. (Diagram: the packet vector flows through graph nodes such as ethernet-input, mpls-ethernet-input, ip4-input, llc-input, ip6-input, arp-input, …, ip6-lookup, ip6-rewrite-transmit, ip6-local; plugins can enable new hardware input nodes and create new nodes such as Custom-A and Custom-B.)
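
For illustration, the sketch below shows the minimal boilerplate a VPP plugin ships so the loader can discover it in the plugin directory; "demo" is a hypothetical plugin name, and a complete plugin would also register its graph nodes and features:

```c
/* Sketch of VPP plugin boilerplate; "demo" is a hypothetical plugin
 * name. Built as a shared object and dropped into VPP's plugin
 * directory, it is discovered and loaded at startup. */
#include <vnet/vnet.h>
#include <vnet/plugin/plugin.h>
#include <vpp/app/version.h>

static clib_error_t *
demo_init (vlib_main_t *vm)
{
  /* Per-plugin initialization: allocate state, register APIs, ... */
  return 0;
}

VLIB_INIT_FUNCTION (demo_init);

/* Makes the shared object recognizable to the VPP plugin loader. */
VLIB_PLUGIN_REGISTER () = {
  .version = VPP_BUILD_VER,
  .description = "Demo plugin (illustrative only)",
};
```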

Continuous performance lab (CPL): develop, submit patch, automated testing, deploy. Continuous functional testing: VM-based virtual testbeds (3-node/3-VM topologies); continuous verification of feature conformance; highly parallel test execution; members can donate VMs to the CPL. Continuous benchmark testing: server-based hardware testbeds (3-node/3-server topologies); a continuous integration process with real hardware verification across server models, CPU models, and NIC models; Platinum members can donate hardware to the CPL.

VPP technology in a nutshell: looks pretty good! (Charts: NDR rates for 2-port 10GE, 1 core, L2 NIC-to-NIC [IMIX Gbps]; NDR rates for 12-port 10GE, 12 cores, IPv4 [IMIX Gbps]; some configurations not tested.) VPP data plane throughput is not impacted by large FIB size, whereas OVS-DPDK data plane throughput is heavily impacted by FIB size. VPP and OVS-DPDK were tested on a Haswell x86 platform with E5-2698v3 2x16C 2.3 GHz (Ubuntu 14.04 trusty).

Not just another vSwitch: the fd.io charter. Create a platform that enables data plane services that are highly performant, modular and extensible, open source, interoperable, and multi-vendor. The platform fosters innovation and synergistic interoperability between data plane services, serves as a source of continuous integration resources for data plane services based on the Consortium's projects/subprojects, and meets the functionality needs of developers, deployers, and datacenter operators. VPP is a rapid packet processing development platform for high-performance network applications. (Diagram: network IO, packet processing, and a data plane management agent, running on bare metal, in a VM, or in a container.)