
1 NCCA 2014
Performance Evaluation of Non-Tunneling Edge-Overlay Model on 40GbE Environment
Ryota Kawashima and Hiroshi Matsuo, Nagoya Institute of Technology, Japan

2 Outline
- Background
  - Ethernet Fabric
  - Network Virtualization
- Edge-Overlay (Distributed Tunnels)
  - Tunneling protocols
  - Problems
- Proposed method
  - MAC address translation
  - Host-based VLAN
- Evaluation
- Conclusion

3 Ethernet Fabric
Scalable L2 datacenter networks:
- L2-based technology
- Multipath without STP (Spanning Tree Protocol)
- Automatic network management
- Standardized protocols: TRILL, SPB, ...
- Many products: FabricPath (Cisco), VCS (Brocade), ...

4 Network Virtualization
- Multi-tenant datacenter networks: each tenant uses one or more virtual networks
- LINP (Logically Isolated Network Partition): each virtual network shares the physical network resources
(Diagram: the virtual networks of Tenants 1-3 are mapped onto a single physical network)

5 Traditional approach: VLAN (Virtual LAN)
- Each virtual network uses its own VLAN ID
- Frame format: DST | SRC | VLAN | TYPE | Payload | FCS; the VLAN ID (1-4094) is carried in the Ethernet header
(Diagram: VMs with VID=10 and VID=20 communicate over the physical network with normal routing/switching)
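As a concrete illustration, here is a minimal sketch in plain Python (with hypothetical MAC addresses) of how the 12-bit VLAN ID is packed into an 802.1Q tag between the source MAC and the EtherType:

    import struct

    def tag_frame(dst_mac: bytes, src_mac: bytes, vid: int,
                  ethertype: int, payload: bytes) -> bytes:
        """Build an 802.1Q-tagged Ethernet frame (FCS omitted)."""
        assert 1 <= vid <= 4094          # 0 and 4095 are reserved
        tpid = 0x8100                    # 802.1Q Tag Protocol Identifier
        tci = vid & 0x0FFF               # PCP/DEI left at 0; 12-bit VLAN ID
        return (dst_mac + src_mac +
                struct.pack("!HH", tpid, tci) +   # 4-byte VLAN tag
                struct.pack("!H", ethertype) + payload)

    # Hypothetical example: tag a frame for the virtual network with VID=10
    frame = tag_frame(b"\x52\x54\x00\x22\x22\x22",
                      b"\x52\x54\x00\x11\x11\x11",
                      vid=10, ethertype=0x0800, payload=b"...")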

6 VLAN limitations
- The maximum number of virtual networks is 4094, yet each tenant can create multiple virtual networks
- Too many Forwarding DB (FDB) entries: the MAC addresses of all VMs have to be learnt
- Address space isolation is difficult: different tenants cannot use the same address space
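The 4094 ceiling follows directly from the 12-bit VLAN ID field, minus the two reserved values:

    # 12-bit VLAN ID field; values 0 and 4095 are reserved
    usable_vids = 2**12 - 2
    print(usable_vids)  # 4094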

7 A trend: the Edge-Overlay approach
Distributed tunneling, NVO3, ...
Purposes:
- Tenant traffic separation
- Address space isolation
- Scalability of the number of virtual networks (beyond 4094)
- Reduction of the number of FDB entries

8 Key technologies
- Tunneling protocols
  - L2-in-L3 (IP-based): VXLAN, NVGRE, STT
  - VN Context Identifier
- NVE (Network Virtualization Edge)
  - TEP (Tunnel End Point)
  - Devices: virtual switches (e.g. Open vSwitch, Cisco Nexus 1000V), ToR switches, gateways

9 Edge-Overlay Overview
(Diagram: VMs of Tenants 1-3 run on two physical servers; the virtual switch on each server acts as the NVE, connecting virtual networks 1-3 through tunnels across the physical network)

10 Tunneling protocols
- VXLAN: Ethernet (physical) | IP (physical) | UDP | VXLAN | VM's frame [Ethernet (virtual) | Payload] | FCS
  - 24-bit ID, UDP encapsulation
- NVGRE: Ethernet (physical) | IP (physical) | NVGRE | VM's frame [Ethernet (virtual) | Payload] | FCS
  - 24-bit ID, IP encapsulation
- STT: Ethernet (physical) | IP (physical) | TCP-like | STT | VM's frame [Ethernet (virtual) | Payload] | FCS
  - 64-bit ID, TCP-like header enables NIC offloading (TSO)
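For illustration, a minimal sketch in plain Python (not a complete NVE) of the 8-byte VXLAN header that carries the 24-bit VNI; the outer Ethernet/IP/UDP headers shown above would be prepended by the encapsulating switch:

    import struct

    VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

    def vxlan_header(vni: int) -> bytes:
        """8-byte VXLAN header: flags (I bit set), 24-bit VNI, reserved bits."""
        assert 0 <= vni < 2**24       # 24-bit ID -> ~16 million virtual networks
        flags = 0x08 << 24            # I flag: the VNI field is valid
        return struct.pack("!II", flags, vni << 8)

    def encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """VXLAN payload = header + the VM's original Ethernet frame."""
        return vxlan_header(vni) + inner_frame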

11 Problems with Tunneling (1/2)
IP fragmentation at the physical server: encapsulation enlarges every packet, so a full-sized VM frame no longer fits into the physical MTU and the outer IP packet must be fragmented.
(Diagram: a VM's packet is split into two fragments after the tunnel header is added)
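A back-of-the-envelope check, assuming the standard 1500-byte MTU and the VXLAN header sizes from the previous slide, of why a full-sized VM frame triggers fragmentation:

    # VXLAN overhead on top of the VM's frame (outer headers, in bytes)
    OUTER_IP, OUTER_UDP, VXLAN_HDR = 20, 8, 8

    def outer_ip_packet_size(inner_frame_len: int) -> int:
        """Size of the outer IP packet carrying one encapsulated VM frame."""
        return OUTER_IP + OUTER_UDP + VXLAN_HDR + inner_frame_len

    mtu = 1500     # physical-network MTU (maximum IP packet on the link)
    inner = 1514   # full-sized VM frame: 1500 bytes + 14-byte Ethernet header
    print(outer_ip_packet_size(inner))        # 1550
    print(outer_ip_packet_size(inner) > mtu)  # True: the outer packet fragments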

12 Problems with Tunneling (2/2)
Compatibility with the existing environment:
- ECMP (Equal-Cost Multi-Path) based load balancing is not supported (NVGRE)
- Firewalls, IDSes, and load balancers may drop packets (STT)
- TSO (TCP Segmentation Offload) cannot be used (VXLAN, NVGRE)
Practical problem:
- Supported protocols differ between products (vendor lock-in)

13 Proposed Method
Yet another edge-overlay method:
- Tunneling protocols are not used
- L2 physical networks
- No IP fragmentation at the physical-server layer
- OpenFlow-enabled virtual switches
- Scalability of the number of virtual networks
- Compatibility with the existing environment

14 Method 1: MAC Address Translation
MAC addresses within the frame are replaced by the virtual switch, so the VMs' MAC addresses are hidden from the physical network:
- SRC address: VM1's address => SV1's address
- DST address: VM2's address => SV2's address
(Diagram: a VM1 => VM2 frame travels as SV1 => SV2 across the network; the receiving virtual switch restores the destination, delivering it as SV1 => VM2)
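A minimal sketch of the translation logic in plain Python with hypothetical address tables; in the actual system these rewrites would be OpenFlow actions installed in the virtual switches:

    # Hypothetical controller state: which server hosts each VM, and all MACs
    vm_location = {"VM1": "SV1", "VM2": "SV2"}
    mac = {"VM1": "52:54:00:11:11:11", "VM2": "52:54:00:22:22:22",
           "SV1": "f4:52:14:12:34:56", "SV2": "f4:52:14:ab:cd:ef"}

    def egress_rewrite(frame: dict) -> dict:
        """Sending virtual switch: hide both VM MACs behind server MACs."""
        src_vm = next(v for v, m in mac.items() if m == frame["src_mac"])
        dst_vm = next(v for v, m in mac.items() if m == frame["dst_mac"])
        return {**frame,
                "src_mac": mac[vm_location[src_vm]],   # VM1 -> SV1
                "dst_mac": mac[vm_location[dst_vm]]}   # VM2 -> SV2

    def ingress_rewrite(frame: dict, local_vm: str) -> dict:
        """Receiving virtual switch: restore the destination VM's MAC."""
        return {**frame, "dst_mac": mac[local_vm]}     # SV2 -> VM2

    frame = {"src_mac": mac["VM1"], "dst_mac": mac["VM2"], "payload": b"..."}
    on_wire = egress_rewrite(frame)            # SV1 => SV2 on the physical net
    delivered = ingress_rewrite(on_wire, "VM2")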

15 Method 2: Host-based VLAN
- Traditional: the VID is globally unique, so at most 4094 virtual networks can exist
- Proposal: the VID only has to be unique within each server, so the number of virtual networks is unlimited
(Diagram: in the proposal, the same virtual network may use different VIDs on different servers, e.g. VID=10 on one server and VID=30 on another)
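A sketch of per-server VID allocation, as a hypothetical controller-side helper (slide 17 explains the actual trigger, the creation of a VM's vport):

    class HostVidAllocator:
        """Allocate VLAN IDs that only need to be unique per physical server,
        so the global number of virtual networks is not capped at 4094."""
        def __init__(self):
            self._by_server = {}   # server -> {virtual_network: local_vid}

        def vid_for(self, server: str, vnet: str) -> int:
            table = self._by_server.setdefault(server, {})
            if vnet not in table:
                used = set(table.values())
                table[vnet] = next(v for v in range(1, 4095) if v not in used)
            return table[vnet]

    alloc = HostVidAllocator()
    print(alloc.vid_for("SV1", "tenantA-net"))  # e.g. 1 on SV1
    print(alloc.vid_for("SV2", "tenantA-net"))  # may differ on SV2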

16 An example
Sender: VM1 (Tenant A, 192.168.0.1, MAC 52:54:00:11:11:11) on physical server SV1 (10.0.0.1, MAC F4:52:14:12:34:56)
Receiver: VM2 (Tenant A, 192.168.0.2, MAC 52:54:00:22:22:22) on physical server SV2 (10.0.0.2, MAC F4:52:14:AB:CD:EF)
An OpenFlow Controller installs the flow entries in both virtual switches.

(1) VM1 sends a frame:
    SRC-IP: 192.168.0.1 / DST-IP: 192.168.0.2
    SRC-MAC: 52:54:00:11:11:11 / DST-MAC: 52:54:00:22:22:22
(2) SV1's virtual switch translates the MACs and tags the frame using its flow table:
    Match (Tenant, Dest)    Action (Server, VID)
    A, VM2                  SV2, 10
    A, VM5                  SV3, 30
    B, VM4                  SV2, 20
    On the wire: SRC-MAC: F4:52:14:12:34:56 / DST-MAC: F4:52:14:AB:CD:EF / VLAN ID: 10
(3) SV2's virtual switch strips the tag and restores the destination using its flow table:
    Match (VID)    Action (Tenant, Dest)
    10             A, VM2
    20             B, VM4
    Delivered frame: SRC-MAC: F4:52:14:12:34:56 / DST-MAC: 52:54:00:22:22:22
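The two flow tables above, expressed as plain-Python lookups (same hypothetical style as the earlier sketches) to trace the example frame:

    # Sender-side table on SV1: (tenant, dest VM) -> (server, local VID)
    sv1_flows = {("A", "VM2"): ("SV2", 10),
                 ("A", "VM5"): ("SV3", 30),
                 ("B", "VM4"): ("SV2", 20)}
    # Receiver-side table on SV2: local VID -> (tenant, dest VM)
    sv2_flows = {10: ("A", "VM2"), 20: ("B", "VM4")}

    server, vid = sv1_flows[("A", "VM2")]  # rewrite DST-MAC to SV2's, push VID 10
    tenant, dest = sv2_flows[vid]          # pop the tag, restore VM2's DST-MAC
    assert (server, vid, tenant, dest) == ("SV2", 10, "A", "VM2")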

17 Questions
How is the isolation of virtual networks ensured?
- The OpenFlow controller knows all information about the VMs: IP/MAC addresses, tenant, and physical server
- Virtual switches only allow communication between VMs of the same tenant
How do virtual switches learn the VLAN ID? (see the sketch below)
- Local VMs
  - When: VM startup (vport is created)
  - How: the controller allocates a VID, triggered by the port-add event
- Remote VMs
  - When: first ARP request
  - How: the controller writes a proper flow entry
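A schematic of those two triggers; every controller method named below is a hypothetical placeholder, since a real controller would react to OpenFlow port-status and packet-in events:

    def on_port_add(controller, server, vport):
        """Local VM appears: allocate a per-server VID for its network."""
        vnet = controller.vnet_of(vport)              # from the VM database
        vid = controller.alloc.vid_for(server, vnet)  # HostVidAllocator above
        controller.install_tag_rule(server, vport, vid)

    def on_arp_request(controller, server, src_vm, dst_ip):
        """First ARP for a remote VM: install the translation flow entry."""
        dst_vm = controller.vm_by_ip(dst_ip)
        if controller.tenant_of(src_vm) != controller.tenant_of(dst_vm):
            return                                    # tenant isolation: drop
        remote = controller.server_of(dst_vm)
        vid = controller.alloc.vid_for(remote, controller.vnet_of(dst_vm))
        controller.install_translate_rule(server, dst_vm, remote, vid)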

18 Feature Comparison

    Feature                      Proposal   VXLAN       NVGRE       STT             VLAN
    Physical network             L2         L2 / L3     L2 / L3     L2 / L3         L2
    MAC address hiding           ✔          ✔           ✔           ✔               -
    No. of virtual networks      Unlimited  16 million  16 million  18 quintillion  4094
    IP multicasting              -          Option      -           -               -
    Load balancing (ECMP)        ✔          ✔           -           ✔               ✔
    FW, IDS, LB transparency     ✔          ✔           ✔           -               ✔
    IP fragmentation (physical)  -          Occurs      Occurs      Occurs          -
    TSO support                  ✔          -           -           ✔               ✔

19 Performance Evaluation
Three types of VM communication are evaluated in a 40GbE environment:
- TCP communication
- UDP communication
- Multiple TCP communications

20 Environment
(Diagram: two physical servers, each running a virtual switch; VM1/VM3 are Iperf clients on the sender, VM2/VM4 are Iperf servers on the receiver; a 40GbE network carries the data plane, over which the compared GRE/VXLAN tunnels run, and a 1GbE control-plane network connects the OpenFlow controller)

21 TCP communication
(Throughput chart; an annotation marks the 8 Gbps level)

22 UDP communication
(Throughput chart; annotations: "Fragmentation", "Too many fragments")

23 Multiple TCP communications
(Throughput chart)

24 Conclusion
Yet another edge-overlay method:
- No tunneling protocols
- No IP fragmentation at the physical-server layer
- Higher throughput than tunneling protocols (over 10 Gbps)
- L2 network
Future work:
- Inter-DC communication support
- MPLS support

