CSIT Readout to LF OPNFV Project – 01 February 2017

Presentation transcript:

CSIT Readout to LF OPNFV Project – 01 February 2017. Maciek Konstantynowicz | Project Lead, FD.io CSIT Project

FD.io CSIT Readout Agenda: FD.io CSIT Background; FD.io CSIT Project; Going Forward.

FD.io Continuous Performance Lab (CPL) – workflow: Develop → Submit Patch → Automated Testing → Deploy. Fully automated testing infrastructure that covers both programmability and data planes, with continuous verification of code/feature functionality and performance. Code breakage and performance degradations are identified before patch review, protecting review, commit and release resources. Fully open-sourced test framework, included at launch. (Slide from the Linux Foundation FD.io pre-launch event, presented by Dave Ward on 12th January 2016 in San Jose, CA, US.)

CSIT(4) Project Scope – fully automated testing of LF(1) FD.io(2) systems: FD.io VPP(3) and related sub-systems (e.g. DPDK(5) Testpmd(6), Honeycomb(7), ...); functionality, performance, regression and new functions.
Functionality tests – against functional specifications: VPP data plane, network control plane, management plane.
Performance tests – limits discovery and benchmarking against established references: VPP data plane, incl. non-drop-rate and partial-drop-rate packet throughput and latency; network control plane; management plane.
Test definitions driven by FD.io VPP functionality, interfaces and performance: uni-dimensional tests (data plane, network control plane, management plane) and multi-dimensional, use-case-driven tests.
Integration with the LF VPP test execution environment: performance tests execute in a physical compute environment hosted by LF; functional tests execute in a virtualized VM VIRL(9) environment hosted by LF.
Integration with the LF FD.io CI(8) system, including FD.io Gerrit and Jenkins: test auto-execution triggered by VPP verify jobs, periodic CSIT jobs, and other FD.io jobs.
(1) LF – Linux Foundation. (2) FD.io – Fast Data I/O, project in LF. (3) VPP – Vector Packet Processing, sub-project in FD.io. (4) CSIT – Continuous System Integration and Testing, sub-project in FD.io. (5) DPDK – Data Plane Development Kit. (6) Testpmd – DPDK example application for baseline DPDK testing. (7) Honeycomb – model-driven management agent for VPP, project in FD.io. (8) CI – Continuous Integration. (9) VIRL – Virtual Internet Routing Lab, SW platform donated by Cisco.

FD.io Continuous Performance Lab, a.k.a. the CSIT Project (Continuous System Integration and Testing). What it is all about – CSIT aspirations:
FD.io VPP benchmarking: VPP functionality per specifications (RFCs(1)); VPP performance and efficiency (PPS(2), CPP(3)); network data plane throughput (Non-Drop Rate, bandwidth, PPS, packet delay); network control plane and management plane interactions (memory leaks!); performance baseline references for the HW + SW stack (PPS(2), CPP(3)); range of deterministic operation for the HW + SW stack (SLA(4)).
Provide a testing platform and tools to the FD.io VPP dev and user community: automated functional and performance tests; automated telemetry feedback with conformance, performance and efficiency metrics.
Help drive good practice and engineering discipline into the FD.io dev community: drive innovative optimizations into the source code and verify they work; enable innovative functional, performance and efficiency additions and extensions; make progress faster; prevent unnecessary code “harm”.
Legend: (1) RFC – Request For Comments, IETF specs basically. (2) PPS – Packets Per Second. (3) CPP – Cycles Per Packet (metric of packet processing efficiency). (4) SLA – Service Level Agreement.

CSIT Project wiki https://wiki.fd.io/view/CSIT

Network Workloads vs. Compute Workloads – they are just a little BIT different: they are all about processing packets. At 10GE, 64B frames can arrive at 14.88 Mfps – that’s 67 nsec per frame. With a 2 GHz CPU core each clock cycle is 0.5 nsec – that’s 134 clock cycles per frame (the sketch below reproduces this arithmetic). BUT it takes ~70 nsec to access memory – not much time to do work if waiting for memory access. Efficiency of dealing with packets within the computer is essential.
Moving packets: packets arrive on physical interfaces (NICs) and virtual interfaces (VNFs) – need CPU-optimized drivers for both; drivers and buffer management software must not rely on memory access – see the time budget above.
Processing packets: header manipulation, encaps/decaps, lookups, classifiers, counters – need packet processing optimized for CPU platforms.
CONCLUSION – need to pay attention to computer efficiency for network workloads. Computer efficiency for x86-64 = close to optimal use of x86-64 uarchitectures: core and uncore resources.
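A minimal sketch reproducing the slide’s per-frame budget arithmetic. The 84 bytes on the wire are the 64B frame plus the 8B preamble and 12B inter-frame gap; the link speed and core frequency are the slide’s figures, not measurements:

```c
#include <stdio.h>

int main(void) {
    /* A 64B Ethernet frame occupies 64 + 8 (preamble) + 12 (IFG)
     * = 84 bytes = 672 bits on the wire. */
    double link_bps    = 10e9;                       /* 10GE link      */
    double wire_bits   = (64 + 8 + 12) * 8;          /* 672 bits/frame */
    double fps         = link_bps / wire_bits;       /* ~14.88 Mfps    */
    double ns_per_frm  = 1e9 / fps;                  /* ~67.2 ns/frame */
    double cpu_hz      = 2e9;                        /* 2 GHz core     */
    double cycles_frm  = ns_per_frm / (1e9 / cpu_hz);/* ~134 cycles    */

    printf("%.2f Mfps, %.1f ns/frame, %.0f cycles/frame budget\n",
           fps / 1e6, ns_per_frm, cycles_frm);
    return 0;
}
```

With a single memory access costing ~70 ns, one cache miss per packet already burns the whole budget – which is why the drivers and buffer management must avoid memory stalls.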

How does the CSIT system see VPP code? As a black box: test against functional specifications (RFCs), reference performance (PPS), reference efficiency (CPP), and then also discover NDR and PDR (PPS, CPP) – see the rate-search sketch below. As a white box: test the paths through the VPP graph nodes and leverage VPP CPP telemetry. Example – testing use cases, e.g. the VPP IP4-over-IP6 softwire MAP implementation.
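A minimal sketch of the NDR/PDR rate-discovery idea from RFC 2544-style benchmarking, not CSIT’s actual implementation. run_trial() is a hypothetical stand-in for a traffic-generator trial; here it simulates a DUT that forwards at most 9.0 Mpps:

```c
#include <stdio.h>

/* Hypothetical trial: offer traffic at `rate_pps` for the trial
 * duration and return the observed loss ratio (0.0 .. 1.0). */
static double run_trial(double rate_pps) {
    const double capacity = 9.0e6;      /* simulated DUT limit */
    return rate_pps <= capacity ? 0.0 : (rate_pps - capacity) / rate_pps;
}

/* Binary-search the highest offered rate whose loss stays at or below
 * `allowed_loss`: 0.0 for NDR, e.g. 0.005 (0.5%) for a PDR. */
static double search_rate(double lo, double hi, double allowed_loss) {
    while (hi - lo > hi * 0.001) {      /* stop at 0.1% precision */
        double mid = (lo + hi) / 2.0;
        if (run_trial(mid) <= allowed_loss)
            lo = mid;                   /* trial passed: go higher */
        else
            hi = mid;                   /* trial failed: go lower  */
    }
    return lo;
}

int main(void) {
    printf("NDR ~ %.2f Mpps\n", search_rate(0.0, 14.88e6, 0.0) / 1e6);
    printf("PDR ~ %.2f Mpps\n", search_rate(0.0, 14.88e6, 0.005) / 1e6);
    return 0;
}
```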

CSIT - Where We Got So Far … CSIT Latest Report – CSIT rls1701 http://docs.fd.io

CSIT – Where We Want To Go … (1/2)
System-level improvements (goal: easier replicability, consumability, extensibility): model-driven test definitions and analytics; more telemetry – know exactly what's happening on the machines (collectd), periodic HW system baselining (pcm, mlc, sys_info scripts), CPU uarch (pmu-tools, CPU jitter measurement tools), baseline instruction/cycle efficiency (Skylake packet trace); continue to automate analytics; better exception handling; more coding discipline :)
More HW for performance tests, more HW offload: latest XEONs in 1- and 2-socket servers; 100GE NICs; SmartNICs.
Use case focus: VM vhost virtual interfaces; LXC shared-memory virtual interfaces; network function chaining.

CSIT – Where We Want To Go … (2/2)
Providing feedback to directly associated projects: FD.io VPP, FD.io Honeycomb, DPDK Testpmd.
Making test-verified use recommendations for FD.io.
Providing feedback to technology stack owners – HW CPU: CPU technologies, e.g. Intel; HW NICs: HW offload, driver costs, ...; OS: kernel related, e.g. CFS; virtualization: QEMU, libvirt, ...; orchestration integration: OpenStack, ... – and asking technology stack owners for optimizations.

Network system performance – reference network device benchmarking specs:
IETF RFC 2544, https://tools.ietf.org/html/rfc2544; RFC 1242, https://tools.ietf.org/html/rfc1242.
opnfv/vsperf – vsperf.ltd; draft-vsperf-bmwg-vswitch-opnfv-01, https://tools.ietf.org/html/draft-vsperf-bmwg-vswitch-opnfv-01.
Derivatives: vnet-sla, http://events.linuxfoundation.org/sites/events/files/slides/OPNFV_VSPERF_v10.pdf

CSIT Latest Report – CSIT rls1701 http://docs.fd.io

Some other slides – FD.io CSIT Readout

Evolution of Programmable Networking (Cloud, NFV, SDN). Many industries are transitioning to a more dynamic model to deliver network services. The great unsolved problem is how to deliver network services in this more dynamic environment. Inordinate attention has been focused on the non-local network control plane (controllers) – necessary, but insufficient. There is a giant gap in the capabilities that foster delivery of dynamic data plane services: the programmable data plane.

FD.io – Fast Data input/output – for Internet packets and services. What is it about – continuing the evolution of computers and networks: Computers => Networks => Networks of Computers => Internet of Computers. Networks in computers require efficient packet processing in computers, enabling modular and scalable Internet packet services in the cloud of computers – routing, bridging, tunneling and servicing packets in CPUs. Making computers be part of the network – making computers become “a bigger helping of Internet” services. FD.io: www.fd.io Blog: blogs.cisco.com/sp/a-bigger-helping-of-internet-please

Introducing the Vector Packet Processor – VPP. VPP is a rapid packet-processing development platform for high-performance network applications. It runs on commodity CPUs and leverages DPDK. It creates a vector of packet indices and processes them using a directed graph of nodes – resulting in a highly performant solution (see the schematic sketch below). Runs as a Linux user-space application. Ships as part of both embedded and server products, in volume. Active development since 2002. (Stack diagram: Network IO → Packet Processing → Data Plane Management Agent, running on bare metal/VM/container.)
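A schematic sketch of the vector-per-node idea, not VPP’s actual node API (real VPP nodes choose a next node per packet and operate on packet indices into buffer pools; a single successor and direct packet pointers are assumed here for brevity):

```c
#include <stdio.h>

typedef struct packet { unsigned id; } packet_t;

typedef struct node {
    const char *name;
    void (*process)(packet_t **pkts, unsigned n);
    struct node *next;  /* simplification: real VPP picks next per packet */
} node_t;

/* Drive one vector of packets through the graph: one pass per node,
 * not one pass per packet, so each node's instruction stream and data
 * structures stay hot in cache across the whole vector. */
static void run_graph(node_t *entry, packet_t **pkts, unsigned n) {
    for (node_t *node = entry; node != NULL; node = node->next)
        node->process(pkts, n);
}

static void count_node(packet_t **pkts, unsigned n) {
    (void)pkts;
    printf("processed a vector of %u packets in one node pass\n", n);
}

int main(void) {
    packet_t p[4] = {{0}, {1}, {2}, {3}};
    packet_t *vec[4] = {&p[0], &p[1], &p[2], &p[3]};
    node_t counter = { "count", count_node, NULL };
    run_graph(&counter, vec, 4);
    return 0;
}
```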

What’s makes it fast – the secret… Is actually not a secret at all – tricks of the trade well known e.g. see DPDK. No-to-Minimal memory traffic per packet. Ahead of packet of course memory needed!  Core writes Rx descriptor in preparation for receiving a packet. NIC reads Rx descriptor to get ctrl flags and buffer address. NIC writes the packet. NIC writes Rx descriptor. Core reads Rx descriptor (polling or irq or coalesced irq). Core reads packet header to determine action. Core performs action on packet header. Core writes packet header (MAC swap, TTL, tunnel, foobar..) Core reads Tx descriptor. Core writes Tx descriptor and writes Tx tail pointer. NIC reads Tx descriptor. NIC reads the packet. NIC writes Tx descriptor. PCIe CPU Cores CPU Socket Memory Controller DDR SDRAM Memory Channels LLC Core operations NIC packet operations NIC descriptor operations 1 rxd txd packet 2 3 4 5 6 8 7 9 10 11 12 13 NICs Most of the hard work done by SW threads in CPU cores and local cache with smart algos and predictive prefetching, IOW shifted forward in time :)
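A hypothetical, simplified Rx poll loop illustrating steps 5–6 above plus the predictive prefetching the slide mentions; the descriptor layout is illustrative, not any real NIC’s:

```c
#include <stdint.h>

struct rx_desc {
    volatile uint32_t status;   /* NIC sets DESC_DONE once packet written */
    void             *buf;      /* packet buffer address */
};
#define DESC_DONE 0x1u

/* Poll the Rx ring starting at *head, harvesting up to `burst`
 * completed packets. Prefetching the next buffer overlaps the ~70ns
 * memory access with work on the current packet instead of stalling. */
static unsigned rx_poll(struct rx_desc *ring, unsigned ring_size,
                        unsigned *head, void **out, unsigned burst) {
    unsigned n = 0, h = *head;
    while (n < burst && (ring[h].status & DESC_DONE)) {  /* step 5 */
        __builtin_prefetch(ring[(h + 1) % ring_size].buf); /* hide latency */
        out[n++] = ring[h].buf;   /* step 6: header examined by caller */
        ring[h].status = 0;       /* recycle descriptor (back to step 1) */
        h = (h + 1) % ring_size;
    }
    *head = h;
    return n;                     /* number of packets harvested */
}
```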

CSIT Platform System Design in a Nutshell. CSIT 3-Node Topology (implemented in both VM and HW environments): a traffic generator connected to two devices under test, DUT1 and DUT2, each running VPP with a Honeycomb agent. CSIT 2-Node Topology (planned).

CSIT Vagrant environment, https://wiki.fd.io/view/CSIT/Tutorials/Vagrant – the DEV host runs three VMs (TG, DUT1, DUT2) with inter-VM links tg_dut1, dut1_dut2 and tg_dut2, a host-only network, and Internet access via NAT; the whole topology is brought up with `vagrant up`.

Q&A