NCCA 2014: Performance Evaluation of Non-Tunneling Edge-Overlay Model on 40GbE Environment. Nagoya Institute of Technology, Japan. Ryota Kawashima and Hiroshi Matsuo.


Outline
- Background
  - Ethernet Fabric
  - Network Virtualization
- Edge-Overlay (Distributed Tunnels)
  - Tunneling protocols
  - Problems
- Proposed method
  - MAC address translation
  - Host-based VLAN
- Evaluation
- Conclusion

Ethernet Fabric
- L2-based technology
- Multipath without STP (Spanning-Tree Protocol)
- Automatic network management
- Standardized protocols: TRILL, SPB, ...
- Many products: FabricPath (Cisco), VCS (Brocade), ...
=> Scalable L2 datacenter networks

Network Virtualization
- Multi-tenant datacenter networks: each tenant uses its own virtual network(s)
- LINP (Logically Isolated Network Partition): each virtual network shares the physical network's resources
(Diagram: VMs of Tenants 1-3, each on its own virtual network, mapped onto one shared physical network)

Traditional approach: VLAN (Virtual LAN)
- Each virtual network uses its own VLAN ID
- The VLAN ID (1-4094) is carried in a tag inside the Ethernet header (DST / SRC / VLAN / TYPE / payload / FCS around the VM's frame)
- Normal routing/switching in the physical network
(Diagram: two VMs on virtual networks VID=10 and VID=20 over the physical network)
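The tag layout above can be made concrete in code. A minimal sketch (not from the slides; the function name is hypothetical) of inserting the 12-bit VLAN ID into an untagged Ethernet frame:

```python
import struct

def tag_frame(frame: bytes, vid: int) -> bytes:
    """Insert an 802.1Q tag (TPID 0x8100 followed by a 16-bit TCI whose
    low 12 bits are the VLAN ID) after the destination and source MACs
    of an untagged Ethernet frame."""
    assert 1 <= vid <= 4094          # VIDs 0 and 4095 are reserved
    tci = vid & 0x0FFF               # priority/DEI bits left at zero
    return frame[:12] + struct.pack("!HH", 0x8100, tci) + frame[12:]

# Untagged frame: 6-byte DST MAC + 6-byte SRC MAC + EtherType + payload
untagged = bytes(12) + b"\x08\x00" + b"payload"
tagged = tag_frame(untagged, 10)
```

The 12-bit TCI field is exactly why the next slide's limit of 4094 networks exists: 2^12 minus the two reserved values.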

VLAN limitations
- The maximum number of virtual networks is 4094, yet each tenant can create multiple virtual networks
- Too many Forwarding DB (FDB) entries: the MAC addresses of all VMs have to be learned
- Address space isolation is difficult: different tenants cannot use the same address space

A trend: the Edge-Overlay approach
- Also called distributed tunneling, NVO3, ...
- Purposes:
  - Tenant traffic separation
  - Address space isolation
  - Scalability of the number of virtual networks (beyond 4094)
  - Reduction of the number of FDB entries

Key technologies
- Tunneling protocols
  - L2-in-L3 (IP-based): VXLAN, NVGRE, STT
  - VN Context Identifier
- NVE (Network Virtualization Edge)
  - TEP: Tunnel End Point
  - Devices: virtual switches (e.g. Open vSwitch, Cisco Nexus 1000V), ToR switches, gateways

Edge-Overlay Overview
(Diagram: VMs of Tenants 1-3 hosted on two physical servers; the virtual switch on each server acts as the NVE, and the tenants' virtual networks are carried over tunnels between the virtual switches across the physical network)

Tunneling protocols
- VXLAN: UDP encapsulation, 24-bit ID
  (physical Ethernet / physical IP / UDP / VXLAN / virtual Ethernet / payload / FCS)
- NVGRE: IP encapsulation, 24-bit ID
  (physical Ethernet / physical IP / NVGRE / virtual Ethernet / payload / FCS)
- STT: TCP-like header, 64-bit ID, enables NIC offloading (TSO)
  (physical Ethernet / physical IP / STT (TCP-like) / virtual Ethernet / payload / FCS)
In each case, the VM's frame is carried as the inner (virtual) Ethernet frame.
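As a concrete illustration of the 24-bit VN context identifier, here is a sketch of packing the 8-byte VXLAN header. None of this code appears in the slides; the field layout follows the public VXLAN specification (RFC 7348).

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: a flags byte with the I bit set
    (0x08), three reserved bytes, the 24-bit VNI, and one reserved byte."""
    assert 0 <= vni < 2 ** 24        # 24-bit ID => about 16 million networks
    return bytes([0x08, 0, 0, 0]) + struct.pack("!I", vni << 8)

hdr = vxlan_header(5000)
```

The 24-bit VNI is what lifts the network count from VLAN's 4094 to roughly 16 million; STT's 64-bit context ID raises it further still.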

Problems with Tunneling (1/2)
- IP fragmentation at the physical server: the added encapsulation headers can push the VM's full-sized packet beyond the physical network's MTU, so the physical server must split the outer packet into fragments
(Diagram: a VM's packet gains an outer header at the physical server and is fragmented into two packets)
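The arithmetic behind this problem can be made explicit. The numbers below are an illustration, not from the slides; they assume VXLAN over IPv4 without options and a standard 1500-byte MTU:

```python
# Why tunneling triggers fragmentation: the outer packet is bigger than
# the inner one by the full encapsulation overhead.
MTU = 1500                       # IP MTU of the physical network
inner_frame = 14 + 1500          # VM's Ethernet header + full-sized IP packet
overhead = 20 + 8 + 8            # outer IPv4 + UDP + VXLAN headers
outer_ip_packet = overhead + inner_frame
needs_fragmentation = outer_ip_packet > MTU   # 1550 > 1500
```

Either the physical network must carry jumbo frames, or every full-sized VM packet is fragmented at the sending server and reassembled at the receiver.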

Problems with Tunneling (2/2)
- Compatibility with the existing environment:
  - ECMP (Equal Cost Multi-Path)-based load balancing is not supported (NVGRE)
  - Firewalls, IDSs, and load balancers may drop packets (STT)
  - TSO (TCP Segmentation Offload) cannot be used (VXLAN, NVGRE)
- Practical problem: the supported protocols differ between products (vendor lock-in)

Proposed Method
- Yet another edge-overlay method:
  - Tunneling protocols are not used
  - L2 physical networks
  - No IP fragmentation at the physical server layer
  - OpenFlow-enabled virtual switches
  - Scalability of the number of virtual networks
  - Compatibility with the existing environment

Method 1: MAC Address Translation
- The MAC addresses within the frame are replaced by the virtual switch:
  - SRC address: VM1's address => SV1's address
  - DST address: VM2's address => SV2's address
- The VMs' MAC addresses are hidden from the physical network
(Diagram: VM1 on physical server SV1 sends a "VM1 => VM2" frame; it crosses the physical network as "SV1 => SV2" and is delivered to VM2 on SV2 as "SV1 => VM2")
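The translation itself is a fixed-offset rewrite of the first twelve bytes of the frame. A minimal sketch (function name hypothetical, MAC addresses taken from the example slide later in the deck):

```python
def translate_macs(frame: bytes, new_src: bytes, new_dst: bytes) -> bytes:
    """Rewrite the destination (bytes 0-5) and source (bytes 6-11) MAC
    addresses of an Ethernet frame; everything after them is untouched."""
    return new_dst + new_src + frame[12:]

vm1 = bytes.fromhex("525400111111")        # VM1's MAC address
vm2 = bytes.fromhex("525400222222")        # VM2's MAC address
sv1 = bytes.fromhex("f45214123456")        # physical server SV1's MAC
sv2 = bytes.fromhex("f45214abcdef")        # physical server SV2's MAC

frame = vm2 + vm1 + b"\x08\x00payload"     # VM1 => VM2, as sent by the VM
on_wire = translate_macs(frame, sv1, sv2)  # SV1 => SV2 on the physical network
```

Because only the servers' MACs appear on the wire, the physical switches need FDB entries per server rather than per VM.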

Method 2: Host-based VLAN
- Traditional: the VID is globally unique, so a virtual network uses the same VID (10, 20, 30, ...) on every server
- Proposal: the VID is unique only within a server
- As a result, the number of virtual networks is unlimited
(Diagram: in the traditional model, Tenants 1 and 2 keep the same VIDs across servers; in the proposal, each server assigns its own local VIDs to the same tenants)

An example
- VM1 (Tenant A, MAC 52:54:00:11:11:11) on physical server SV1 (MAC F4:52:14:12:34:56) sends a frame to VM2 (Tenant A, MAC 52:54:00:22:22:22) on physical server SV2 (MAC F4:52:14:AB:CD:EF), across a traditional datacenter network
- (1) VM1 emits the frame with SRC-MAC 52:54:00:11:11:11 and DST-MAC 52:54:00:22:22:22
- (2) SV1's virtual switch looks up a flow table installed by the OpenFlow controller and rewrites the frame to SRC-MAC F4:52:14:12:34:56, DST-MAC F4:52:14:AB:CD:EF, VLAN ID 10; the IP header is left unchanged
- (3) SV2's virtual switch matches VLAN ID 10, restores DST-MAC 52:54:00:22:22:22, and delivers the frame to VM2

Sender-side flow table on SV1 (match => action):
  Tenant A, dest VM2 => server SV2, VID 10
  Tenant A, dest VM5 => server SV3, VID 30
  Tenant B, dest VM4 => server SV2, VID 20
Receiver-side flow table on SV2 (match => action):
  VID 10 => Tenant A, VM2
  VID 20 => Tenant B, VM4
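The two flow tables in the example can be modeled as simple lookups. The entries below are taken from the slide; the dict structure itself is a hypothetical stand-in for the OpenFlow flow tables:

```python
# Sender side (SV1): match (tenant, destination VM), rewrite to
# (destination server, server-local VID).
sv1_table = {
    ("A", "VM2"): ("SV2", 10),
    ("A", "VM5"): ("SV3", 30),
    ("B", "VM4"): ("SV2", 20),
}
# Receiver side (SV2): match the incoming VID, restore the tenant's VM.
sv2_table = {
    10: ("A", "VM2"),
    20: ("B", "VM4"),
}

dst_server, vid = sv1_table[("A", "VM2")]   # forwarding decision on SV1
tenant, dst_vm = sv2_table[vid]             # delivery decision on SV2
```

Note that VID 10 on SV2 means "Tenant A's VM2" only on SV2; another server is free to bind VID 10 to a different network.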

Questions
- How is the isolation of virtual networks ensured?
  - The OpenFlow controller knows all information about the VMs (IP/MAC addresses, tenant, physical server)
  - Virtual switches allow communication only between VMs of the same tenant
- How do virtual switches learn the VLAN IDs?
  - Local VMs: at VM startup (when the vport is created), the controller allocates a VID, triggered by the port-add event
  - Remote VMs: on the first ARP request, the controller writes the proper flow entry
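The local-VM case above can be sketched as a per-server allocator. This is an illustration of the idea, not the authors' implementation; the class and method names are hypothetical:

```python
class ServerVidAllocator:
    """Per-server VID allocation: each virtual network is lazily given
    one server-local VID when its first vport is added on the server."""

    def __init__(self) -> None:
        self.by_network: dict[str, int] = {}

    def on_port_add(self, network: str) -> int:
        """Controller handler for a vport-add event: reuse the network's
        existing local VID, or hand out the lowest free one."""
        if network not in self.by_network:
            in_use = set(self.by_network.values())
            self.by_network[network] = next(
                v for v in range(1, 4095) if v not in in_use)
        return self.by_network[network]

sv1_alloc, sv2_alloc = ServerVidAllocator(), ServerVidAllocator()
```

Each server draws from its own pool of 4094 VIDs, so the per-server limit caps only how many virtual networks one server hosts simultaneously, not how many exist overall.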

Feature Comparison

Feature                      | Proposal  | VXLAN      | NVGRE      | STT            | VLAN
-----------------------------+-----------+------------+------------+----------------+-----
Physical network             | L2        | L2 / L3    | L2 / L3    | L2 / L3        | L2
MAC address hiding           | ✔         | ✔          | ✔          | ✔              | -
No. of virtual networks      | Unlimited | 16 million | 16 million | 18 quintillion | 4094
IP multicasting              | -         | Option     | -          | -              | -
Load balancing (ECMP)        | ✔         | ✔          | -          | ✔              | ✔
FW, IDS, LB transparency     | ✔         | ✔          | ✔          | -              | ✔
IP fragmentation (physical)  | -         | Occurs     | Occurs     | Occurs         | -
TSO support                  | ✔         | -          | -          | ✔              | ✔

Performance Evaluation
- Three types of VM communication are evaluated in a 40 GbE environment:
  - TCP communication
  - UDP communication
  - Multiple TCP communications

Environment
(Diagram: two physical servers connected by a 40GbE data-plane network, with an OpenFlow controller attached over a 1GbE control-plane network; VM1 and VM3 run iperf clients on physical server 1, VM2 and VM4 run iperf servers on physical server 2, connected through virtual switches, with a GRE / VXLAN tunnel between the switches for the baseline configurations)

TCP communication
(Graph: TCP throughput results, with an 8 Gbps annotation)

UDP communication
(Graph: UDP throughput results, annotated with "Fragmentation" and "Too many fragments")

Multiple TCP communications
(Graph: throughput results for multiple concurrent TCP communications)

Conclusion
- Yet another edge-overlay method:
  - No tunneling protocols
  - No IP fragmentation at the physical server layer
  - Higher throughput than tunneling protocols (over 10 Gbps)
  - L2 network
- Future work:
  - Inter-DC communication support
  - MPLS support