1 Network Virtualization Overlay Control Protocol Requirements draft-kreeger-nvo3-overlay-cp Lawrence Kreeger, Dinesh Dutt, Thomas Narten, David Black, Murari Sridharan

2 Purpose Outline the high-level requirements for control protocols needed for overlay virtual networks in highly virtualized data centers.

3 Basic Reference Diagram [Diagram: Tenant End Systems (TESs) with “inner” addresses attach to Virtual Networks (VNs, aka Overlay Networks) through Network Virtualization Edges (NVEs). An NVE encapsulates the payload with a VN Context and carries it across the Underlying Network (UN) using “outer” addresses.]
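The map+encap step implied by the diagram can be sketched as follows; this is a minimal illustration, and the field names are assumptions, not a real encapsulation format from the draft.

```python
# Hypothetical sketch of NVE map+encap: wrap a tenant frame carrying
# "inner" TES addresses with a VN Context and UN "outer" addresses.
def map_and_encap(inner_frame, vn_context, outer_src, outer_dst):
    return {
        "outer_src": outer_src,      # sending NVE's UN address
        "outer_dst": outer_dst,      # UN address of the NVE that reaches the TES
        "vn_context": vn_context,    # identifies the VN at the receiving NVE
        "payload": inner_frame,      # original frame with "inner" TES addresses
    }

pkt = map_and_encap({"src": "M3", "dst": "M1"}, 10000, "IP-A2", "IP-A1")
```

The receiving NVE would perform the inverse decap+deliver step: strip the outer header, use the VN Context to select the VN, and deliver the payload to the TES.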

4 Possible NVE / TES Scenarios [Diagram: three placements of the NVE. (1) NVE inside the hypervisor’s virtual switch, with VMs as TESs. (2) NVE in an access switch, reached from physical servers or hypervisors over a VLAN trunk using locally significant tags. (3) NVE in an access switch in front of a Network Services Appliance whose services are the TESs. In each case the End Device reaches the Underlying Network through the NVE.]

5 Dynamic State Information Needed by an NVE
Tenant End System (TES) inner address (scoped by Virtual Network (VN)) mapped to the outer (Underlying Network (UN)) address of the other Network Virtualization Edge (NVE) used to reach the TES inner address.
For each VN active on an NVE, a list of UN multicast addresses and/or unicast addresses used to send VN broadcast/multicast packets to the other NVEs forwarding to TESs for the VN.
Either a global VN Context per VN (Virtual Network ID - VNID), or a locally significant VN Context, to use in packets sent across the UN.
If the TES is not within the same device as the NVE, the NVE needs to know the physical port to reach a given inner address.
– If multiple VNs are reachable over the same physical port, some kind of tag (e.g. VLAN tag) is needed to keep the VN traffic separated over the wire.
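The state listed above can be summarized as two small tables per NVE. The sketch below is illustrative only; the class and field names are assumptions and do not come from the draft.

```python
# Hypothetical model of the per-NVE dynamic state described in this slide.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VNState:
    vn_id: int                    # global VNID or locally significant VN Context
    mcast_group: str              # UN address for VN broadcast/multicast delivery
    local_tag: Optional[int] = None   # e.g. VLAN tag when TESs sit behind a trunk
    port_for_inner: dict = field(default_factory=dict)  # inner addr -> physical port

@dataclass
class NVEState:
    vns: dict = field(default_factory=dict)             # VN name -> VNState
    inner_to_outer: dict = field(default_factory=dict)  # (VN, inner addr) -> outer NVE addr

nve = NVEState()
nve.vns["Red"] = VNState(vn_id=10000, mcast_group="224.1.2.3", local_tag=100)
nve.inner_to_outer[("Red", "M1")] = "IP-A1"
```

The inner-to-outer table drives the map+encap function; the per-VN table drives broadcast/multicast delivery and local traffic separation.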

6 Control Plane Category Reference Diagram [Diagram: End Devices (a hypervisor with a virtual switch, a Network Services Appliance) connect to their NVEs over a “Server to NVE” control plane. Each NVE talks over an “NVE to oracle” control plane to an “oracle” (a central entity? distributed?), which may expose an API to an Orchestration System.]

7 Main Categories of Control Protocols
For NVE:
1. For an NVE to obtain dynamic state for communicating with a TES located on an End Device (e.g. hypervisor or Network Services Appliance). [server to NVE]
2. For an NVE to obtain dynamic state for communicating across the Underlying Network to other NVEs. [NVE to oracle]
For “oracle”:
3. Within components of a distributed “oracle”. [Intra-oracle]
4. API to an Orchestration System. [oracle to orchestration]

8 “NVE to oracle” Control Protocol Possibilities
The “oracle” is populated by Push from the NVEs (if not populated by a DC orchestration system)
NVE mappings populated by Push from a central entity
NVE mappings populated by Pull from a central entity
Peer-to-peer exchange between NVEs with no “oracle”
The “oracle” could be a monolithic system or a distributed system
– If distributed, a control protocol may be needed between the oracle’s elements (Intra-oracle)

9 Possible Example CP Scenario [Diagram: three hypervisors (H1, H2, H3), each with a virtual switch, attached to NVE access switches A1, A2, A3 (NVE IPs IP-A1, IP-A2, IP-A3) on ports 10, 20, and 30; each NVE keeps its own NVE State. “Server to NVE” and “NVE to oracle” exchanges are shown in the following slides.] This example is not part of the requirements draft and is shown for illustrative purposes. It assumes a central entity with push/pull from the NVEs and a multicast-enabled IP underlay.

10 VM 1 comes up on Hypervisor H1, connected to VN “Red”
VM 1 (MAC=M1) starts on H1. H1’s Virtual Switch signals to A1 that it needs attachment to VN “Red”. [Server to NVE: Attach: VN = “Red”]
A1 requests the VN-ID and Mcast Group for VN = “Red”; the central entity answers VN-ID = 10000, Mcast = 224.1.2.3. [NVE to oracle]
A1 assigns Local VLAN Tag = 100 for Port 10 and sends an IGMP Join for 224.1.2.3.
A1’s NVE State: VN “Red”: VN-ID = 10000, Mcast Group = 224.1.2.3, Port 10, Tag=100.

11 VM 1 comes up on Hypervisor H1, connected to VN “Red” (cont.)
H1’s Virtual Switch signals to A1 that MAC M1 is connected to VN “Red”. [Server to NVE: Attach: MAC = M1 in VN “Red”]
A1 registers with the central entity: MAC = M1 in VN “Red” reachable at IP-A1. [NVE to oracle]
A1’s NVE State adds: MAC = M1 in “Red” on Port 10.

12 VM 2 comes up on Hypervisor H1, connected to VN “Red”
VM 2 (MAC=M2) starts on H1. H1’s Virtual Switch signals to A1 that MAC M2 is connected to VN “Red”. [Server to NVE: Attach: MAC = M2 in VN “Red”]
A1 registers with the central entity: MAC = M2 in VN “Red” reachable at IP-A1. [NVE to oracle]
A1’s NVE State adds: MAC = M2 in “Red” on Port 10.

13 VM 3 comes up on Hypervisor H2, connected to VN “Red”
VM 3 (MAC=M3) starts on H2. H2’s Virtual Switch signals to A2 that it needs attachment to VN “Red”. [Server to NVE: Attach: VN = “Red”]
A2 requests the VN-ID and Mcast Group for VN = “Red”; the central entity answers VN-ID = 10000, Mcast = 224.1.2.3. [NVE to oracle]
A2 assigns Local VLAN Tag = 200 for Port 20 and sends an IGMP Join for 224.1.2.3.
A2’s NVE State: VN “Red”: VN-ID = 10000, Mcast Group = 224.1.2.3, Port 20, Tag=200.

14 VM 3 comes up on Hypervisor H2, connected to VN “Red” (cont.)
H2’s Virtual Switch signals to A2 that MAC M3 is connected to VN “Red”. [Server to NVE: Attach: MAC = M3 in VN “Red”]
A2 registers with the central entity: MAC = M3 in VN “Red” reachable at IP-A2. [NVE to oracle]
A2’s NVE State adds: MAC = M3 in “Red” on Port 20.
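The attach/register exchanges in slides 10 through 14 can be sketched as follows. This is a toy simulation under the slide’s assumptions (central entity, push/pull from the NVEs); the class names, message shapes, and tag-assignment policy are all illustrative, not from the draft.

```python
# Hypothetical sketch of the "Server to NVE" attach and "NVE to oracle"
# register flow shown in the example scenario.
class Oracle:
    def __init__(self):
        self.vn_info = {"Red": {"vn_id": 10000, "mcast": "224.1.2.3"}}
        self.mappings = {}                     # (VN, MAC) -> outer NVE IP

    def get_vn_info(self, vn):                 # "Req VN-ID and Mcast Group"
        return self.vn_info[vn]

    def register(self, vn, mac, nve_ip):       # "Register MAC ... reachable at ..."
        self.mappings[(vn, mac)] = nve_ip

class NVE:
    def __init__(self, ip, oracle):
        self.ip, self.oracle = ip, oracle
        self.vn_state = {}                     # VN -> {vn_id, mcast, port, tag, macs}

    def attach_vn(self, vn, port, tag):        # End Device: "Attach: VN = ..."
        info = self.oracle.get_vn_info(vn)
        self.vn_state[vn] = {**info, "port": port, "tag": tag, "macs": {}}
        # a real NVE would also send an IGMP Join for info["mcast"] here

    def attach_mac(self, vn, mac):             # End Device: "Attach: MAC = ..."
        self.vn_state[vn]["macs"][mac] = self.vn_state[vn]["port"]
        self.oracle.register(vn, mac, self.ip)

oracle = Oracle()
a1, a2 = NVE("IP-A1", oracle), NVE("IP-A2", oracle)
a1.attach_vn("Red", port=10, tag=100)
a1.attach_mac("Red", "M1"); a1.attach_mac("Red", "M2")
a2.attach_vn("Red", port=20, tag=200)
a2.attach_mac("Red", "M3")
```

After these exchanges the oracle holds the inner-to-outer mappings for M1, M2 (at IP-A1) and M3 (at IP-A2), matching the NVE State panels on the slides.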

15 VM 3 ARPs for VM 1
VM 3’s ARP broadcast reaches A2 tagged with VLAN 200.
NVE A2 encapsulates the ARP with VN-ID 10000 and sends it to Group 224.1.2.3, using multicast to reach all NVEs interested in VN “Red”.
The Underlying Network multicasts the ARP; A1 receives it, decapsulates it, and delivers it to H1 tagged with VLAN 100.

16 VM 1 Sends ARP Response to VM 3
VM 1’s ARP Response reaches A1 tagged with VLAN 100.
NVE A1 queries the central entity to find the inner-to-outer mapping for MAC M3. [NVE to oracle: Query for outer address for MAC M3 in “Red”; Response: Use IP-A2]
A1’s NVE State adds: MAC = M3 in “Red” on NVE IP-A2.
A1 encapsulates the ARP Response with VN-ID 10000 and unicasts it to IP-A2.
The Underlying Network unicasts the packet; A2 decapsulates it and delivers the ARP Response to H2 tagged with VLAN 200.
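The pull-style lookup in this slide amounts to a query-on-miss with local caching. The sketch below is a minimal illustration of that step; the function name and data shapes are assumptions, not from the draft.

```python
# Hypothetical sketch of the "NVE to oracle" pull lookup used in slide 16.
def resolve_outer(nve_cache, oracle_mappings, vn, mac):
    """Return the outer NVE IP for (vn, mac), caching oracle answers locally."""
    key = (vn, mac)
    if key not in nve_cache:                   # miss: query the central entity
        nve_cache[key] = oracle_mappings[key]  # "Response: Use IP-A2"
    return nve_cache[key]                      # subsequent sends hit local state

oracle_mappings = {("Red", "M3"): "IP-A2"}     # populated by earlier Register messages
a1_cache = {}
outer = resolve_outer(a1_cache, oracle_mappings, "Red", "M3")
# A1 would now encapsulate the ARP Response with VN-ID 10000 and unicast it to `outer`.
```

Caching the answer is what lets A1 forward later unicast traffic to M3 without another oracle round trip, which is one motivation for the "low amount of state, only what is needed at the time" characteristic on the next slide.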

17 Summary of CP Characteristics
Lightweight for the NVE – this means:
Low amount of state (only what is needed at the time)
Low complexity (keep it simple)
Low overhead (don’t drain resources from the NVE)
Highly scalable (don’t collapse when scaled)
Extensible:
Support multiple address families (e.g. IPv4 and IPv6)
Allow addition of new address families
Quickly reactive to change
Support live migration of VMs

18 Summary
Two categories of NVE control protocols are needed to support a dynamic virtualized data center by dynamically building the state an NVE needs to perform its map+encap and decap+deliver functions.
– Server to NVE
– NVE to “oracle”
Two categories of “oracle” control protocols may be needed.
– Intra-oracle
– Orchestration system to “oracle”
draft-kreeger-nvo3-overlay-cp-req covers both “server to NVE” and “NVE to oracle”, but not the latter two.

19 Proposals
Move the figures showing a co-located TES and NVE and a physically separate NVE from draft-kreeger-nvo3-cp-req to the Framework document.
Separate the two NVE control plane requirements (NVE-to-oracle and server-to-NVE) covered by draft-kreeger-nvo3-cp-req into different documents.
Work with other server-to-NVE draft authors to create a candidate WG server-to-NVE control plane requirements draft.
Move the split-out NVE-to-oracle control plane requirements document to WG draft.
Agree on a better name than “oracle”. Perhaps “Mapping System”.
Change “Server-to-NVE” to “End Device to NVE”.

