VSE: Virtual Switch Extension for Adaptive CPU Core Assignment in softirq
Shin Muramatsu, Ryota Kawashima, Shoichi Saito, Hiroshi Matsuo
Nagoya Institute of Technology, Japan

Background
The spread of public cloud datacenters (DCs)
Multi-tenancy is supported in many DCs
 Multiple tenants’ VMs run on the same physical server
 An overlay protocol enables network virtualization

Overlay-based Network Virtualization
[Figure: two physical servers, each hosting VMs of tenant A and tenant B behind a virtual switch. The sending virtual switch encapsulates packets with a tunnel header into an IP tunnel across the traditional datacenter network, and the receiving virtual switch decapsulates them.]

Problems in Receiver Physical Servers
[Figure: a physical server with four cores. The NIC raises an HWIRQ, and the subsequent SWIRQ packet processing (driver, protocol stack, VXLAN, vSwitch) runs on the cores together with VM1 and VM2. Load distribution is required.]

Receive Side Scaling (RSS)
The queue number is determined by a hash value calculated from the packet headers.
[Figure: an RSS-enabled NIC dispatches packets to queues 1-4, each tied to one of cores 1-4 running the driver, protocol stack, VXLAN, and vSwitch processing. Two problems are shown: a flow collision (two flows dispatched to the same core) and a flow/VM collision (a flow dispatched to the core running a VM).]
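
A minimal sketch of this dispatching idea (real NICs use a Toeplitz hash over the same header fields, so the hash function below is only a stand-in): the receive queue, and hence the core that handles the interrupt, depends only on the packet headers, so two heavy flows, or a flow and a VM, can land on the same core regardless of its current load.

```c
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for the NIC's RSS hash (real NICs use a Toeplitz
 * hash over the same fields). The point: the queue, and hence the core,
 * is a pure function of the headers, independent of core load. */
static uint32_t rss_hash(uint32_t saddr, uint32_t daddr,
                         uint16_t sport, uint16_t dport)
{
    uint32_t h = saddr ^ daddr ^ (((uint32_t)sport << 16) | dport);

    h ^= h >> 16;
    h *= 0x45d9f3bU;
    h ^= h >> 16;
    return h;
}

int main(void)
{
    const unsigned num_queues = 4;

    /* Two different flows can still collide on the same queue/core. */
    printf("flow A -> queue %u\n",
           rss_hash(0x0a000001, 0x0a000002, 5001, 4789) % num_queues);
    printf("flow B -> queue %u\n",
           rss_hash(0x0a000003, 0x0a000002, 5002, 4789) % num_queues);
    return 0;
}
```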

Performance Impact of the Two Types of Collision
Tunneling protocol: VXLAN + AES (heavy-cost processing)
1. Two flows were handled on the same core
 Packet processing load was concentrated on a particular core
2. A flow and a VM were handled by the same core
 The core was used for packet processing instead of the VM

Problems in Existing Models
A core is deterministically selected for HWIRQ
 Heavy flows can be processed on a particular core
 Heavy flows and VMs can be processed on the same core
 Performance decreases

Proposed Model (Virtual Switch Extension)
A software component for packet processing in the network driver
 VSE determines the CPU core for SWIRQ
›The current core load is considered
›Appropriate core selection
 VSE has an OpenFlow-based flow table
›Controllers can manage how flows are handled
›Priority flows are processed on low-loaded cores
 Vendor-specific functionality is not used
›The vendor lock-in problem can be avoided
[Figure: VSE, holding an OF-based flow table, sits in the network driver below the VMs of tenant A and tenant B.]

Architectural Overview of VSE
[Figure: each physical server runs VSE inside the network driver, below the VMs of tenant A and tenant B; the servers are connected through the traditional datacenter network, and a controller inserts flow entries into VSE.]
VSE holds two tables:
 Flow Table: Match (L2-L4 headers) / Actions (SWIRQ core)
 Core Table: Number / Load / VM_ID
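
As a concrete illustration, the two tables could be laid out roughly as follows in C; the struct and field names are assumptions made for this sketch, not the authors' actual data structures.

```c
#include <stdint.h>

/* Hypothetical C layout of the two VSE tables described above. */

/* Flow Table entry: match on L2-L4 headers, action = SWIRQ target core. */
struct vse_flow_entry {
    uint8_t  dst_mac[6];             /* L2 match */
    uint32_t src_ip, dst_ip;         /* L3 match */
    uint16_t src_port, dst_port;     /* L4 match */
    uint8_t  ip_proto;
    int      swirq_core;             /* action: core that runs the SWIRQ */
};

/* Core Table entry: per-core load and the VM (if any) running on it. */
struct vse_core_entry {
    int      number;                 /* core number */
    unsigned load;                   /* current load (e.g. in percent) */
    int      vm_id;                  /* VM on this core, -1 if none */
};
```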

How VSE Works
[Figure: VM1 is running on one of cores 1-4. The RSS-enabled NIC's hash function dispatches packets to queues 1-4, and the driver passes them to VSE. VSE's Flow Table matches VM1's flow and its action selects the SWIRQ core, while the Core Table tracks each core's number, load, and VM_ID; the protocol stack, VXLAN, and vSwitch processing then runs on the selected core. VMs' flows can be matched against the table.]
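
The selection step implied by this slide might look like the sketch below: skip the core running the flow's destination VM (the flow/VM collision) and prefer the least loaded of the remaining cores (the flow/flow collision). This is an assumed illustration of the idea, not the authors' implementation.

```c
#include <stddef.h>

/* Same Core Table entry as sketched above. */
struct vse_core_entry {
    int      number;
    unsigned load;
    int      vm_id;   /* -1 if no VM is pinned to this core */
};

/* Hypothetical SWIRQ core selection: avoid the destination VM's core,
 * then pick the least-loaded remaining core. A sketch only. */
static int vse_pick_swirq_core(const struct vse_core_entry *cores,
                               size_t ncores, int dst_vm_id)
{
    int best = -1;

    for (size_t i = 0; i < ncores; i++) {
        if (cores[i].vm_id == dst_vm_id)     /* don't steal the VM's core */
            continue;
        if (best < 0 || cores[i].load < cores[(size_t)best].load)
            best = (int)i;
    }
    /* Fall back to core 0 if every core hosts the destination VM. */
    return best >= 0 ? cores[best].number : 0;
}
```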

Implementation
Exploiting Receive Packet Steering (RPS)
 VSE
›determines the core for SWIRQ using the Flow and Core tables
›notifies the determined core number to RPS
 RPS
›raises the SWIRQ on the notified core
[Figure: a flow entry (Match: VM1's flow, Action: SWIRQ:4) makes RPS deliver the packet to the protocol stack on core 4.]
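
For background on the RPS side of this design: stock Linux RPS exposes a per-receive-queue CPU bitmask through sysfs and raises the receive SWIRQ on one of the allowed CPUs, normally chosen from a packet-header hash; VSE instead supplies the core taken from its tables. The small user-space sketch below only shows the standard sysfs knob (the interface name "eth0" and queue "rx-0" are assumptions), not the VSE modification itself.

```c
#include <stdio.h>
#include <stdlib.h>

/* Write a hex CPU bitmask to the stock RPS sysfs knob for one receive
 * queue; the kernel then runs the receive SWIRQ on one of those CPUs. */
int main(void)
{
    const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        return EXIT_FAILURE;
    }
    fprintf(f, "f\n");   /* hex CPU mask: allow SWIRQ on cores 0-3 */
    fclose(f);
    return EXIT_SUCCESS;
}
```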

Performance Evaluation
The network environment: two physical servers connected by a 40GbE network. Iperf client VMs (VM1, VM2) on one server send VXLAN + AES UDP traffic to Iperf server VMs (VM1, VM2) on the other server.
Machine specifications:
 OS: CentOS 6.5 (kernel 2.6.32) on the physical servers, ubuntu-server on the VMs
 CPU: Core i5 (4 cores) on Physical Server1, Core i7 (4 cores) on Physical Server2, 1 core per VM
 Memory: 16 Gbytes (physical servers), 2 Gbytes (VM)
 Virtual Switch: Open vSwitch
 Network: 40GBASE-SR4
 NIC: Mellanox ConnectX(R)-3 (physical), virtio-net (VM)
 MTU: 1500 bytes

Evaluation Details
Evaluation models:
 default: RSS off, VSE off, HWIRQ target core 4
 rss: RSS on, VSE off, HWIRQ target cores 1-4
 vse: RSS on, VSE on, HWIRQ target cores 1-4
Iperf clients:
 Protocol: UDP (tunneling: VXLAN + AES)
 Packet sizes: 64 / 1400 / 8192 bytes
 Total evaluation time: 20 minutes
 Number of flow generations: 20 times
 Flow duration time: 1 minute

Results: Total Throughput of Two VMs
[Figure: total throughput of the two VMs for each evaluation model.]
 All fragmented packets were handled on a single core
 VSE distributed the packet processing load properly
 VSE can appropriately distribute the packet processing load

Conclusion and Future Work
Conclusion
 We proposed VSE, which distributes the received packet processing load
 Throughput can be improved using VSE
Future work
 Implement the protocol between a controller and VSE
 Adaptively change the SWIRQ target based on the current core load