VXLAN Nexus 9000 Module 6 – MP-BGP EVPN Design
VXLAN EVPN Design Options
VXLAN Fabric Design – Spine nodes as RR
[Diagram: VXLAN overlay with MP-iBGP EVPN sessions from every leaf VTEP to two spine route reflectors (RR)]
- VTEP functions are on the leaf layer
- Spine nodes are iBGP route reflectors
- Spine nodes don’t need to be VTEPs
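A minimal NX-OS sketch of the spine-side route-reflector configuration for this design, assuming AS 65000 and an illustrative leaf loopback address (10.0.0.11):

nv overlay evpn
router bgp 65000
  router-id 10.0.0.1
  neighbor 10.0.0.11 remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
      route-reflector-client

The same neighbor stanza repeats for each leaf; BGP peer templates can shorten the configuration.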
VXLAN EVPN Fabric with MP-iBGP Design (Cont’d)
Spine switches are not capable of running MP-BGP EVPN. Leaf switches are chosen to provide iBGP route-reflector functions to the other VTEP leaf nodes, and all other leaf nodes peer with them through iBGP.
[Diagram: VXLAN overlay in which two leaf switches act as iBGP route reflectors; the spine layer is plain IP transport]
VXLAN EVPN Fabric with MP-iBGP Design (Cont’d)
Spine switches don’t need to be able to run MP-BGP EVPN; they are purely IP transport devices. Dedicated MP-BGP EVPN route reflectors provide better scalability and control-plane performance, and can be connected to the fabric network in the same way as a leaf node. All leaf VTEPs run iBGP sessions with the dedicated route reflectors.
[Diagram: two dedicated route reflectors attached like leaf nodes; iBGP sessions from each Cisco Nexus 9300 leaf VTEP to both RRs]
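A leaf-side sketch of those sessions on NX-OS, assuming AS 65000 and illustrative loopback addresses for the two dedicated route reflectors:

router bgp 65000
  neighbor 10.0.0.201 remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended
  neighbor 10.0.0.202 remote-as 65000
    update-source loopback0
    address-family l2vpn evpn
      send-community extended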
VXLAN Fabric Design with MP-eBGP EVPN
BGP on the spine needs the following under address-family l2vpn evpn: BGP next-hop unchanged and retain route-target all.
[Diagram: spine in AS 65000 with MP-eBGP sessions to leaf VTEPs in AS 65001 through AS 65006]
- VTEP functions are on the leaf layer
- Spine nodes are MP-eBGP peers
- Spine nodes don’t need to be VTEPs
- Route targets need to be manually configured on each VTEP, because auto-derived route targets embed the local AS number, which differs per leaf
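On NX-OS, next-hop unchanged is commonly applied with an outbound route-map on the EVPN sessions, while retain route-target all sits under the address family. A sketch for the AS 65000 spine with one illustrative AS 65001 leaf neighbor:

route-map RM-NH-UNCHANGED permit 10
  set ip next-hop unchanged
!
router bgp 65000
  address-family l2vpn evpn
    retain route-target all
  neighbor 10.0.0.11 remote-as 65001
    update-source loopback0
    ebgp-multihop 2
    address-family l2vpn evpn
      send-community extended
      route-map RM-NH-UNCHANGED out

retain route-target all matters because a non-VTEP spine imports no VRFs of its own and would otherwise discard EVPN routes whose route targets it does not match; next-hop unchanged keeps the originating leaf as the BGP next hop so VXLAN tunnels terminate leaf-to-leaf.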
VXLAN Fabric Design with MP-eBGP EVPN (Cont’d)
BGP on the spine needs the following under address-family l2vpn evpn: BGP next-hop unchanged and retain route-target all.
[Diagram: spine in AS 65000 with MP-eBGP sessions to six leaf VTEPs that all share AS 65100]
- All VTEP leafs are in the same BGP AS
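Because all leafs share AS 65100, two extra knobs are typically needed in this variant: disable-peer-as-check on the spine, so it will re-advertise routes toward eBGP peers in the originating AS, and allowas-in on the leafs, so they accept routes that already carry their own AS in the path. A sketch with illustrative addresses:

! Spine side (AS 65000)
router bgp 65000
  neighbor 10.0.0.11 remote-as 65100
    address-family l2vpn evpn
      send-community extended
      disable-peer-as-check
      route-map RM-NH-UNCHANGED out
!
! Leaf side (AS 65100)
router bgp 65100
  neighbor 10.0.0.1 remote-as 65000
    address-family l2vpn evpn
      send-community extended
      allowas-in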
EVPN VXLAN Fabric Inter Data Center Connectivity (Existing)
[Diagram: DC #1 and DC #2 EVPN iBGP fabrics with spine route reflectors, sharing one VXLAN overlay EVPN VRF/VRFs space; the border leafs interconnect through a VLAN hand-off to an existing flood-and-learn OTV/VPLS interconnect]
- One EVPN administrative domain stretched across two data centers
EVPN VXLAN Fabric Inter Data Center Connectivity (Option A)
[Diagram: DC #1 and DC #2 EVPN iBGP fabrics with spine route reflectors, sharing one VXLAN overlay EVPN VRF/VRFs space; Inter-DC EVPN eBGP session between the border leafs]
- One EVPN administrative domain stretched across two data centers
EVPN VXLAN Fabric Inter Data Center Connectivity (Option A’)
[Diagram: as in Option A, but with the route reflectors shown at the border leafs; DC #1 and DC #2 EVPN iBGP fabrics, Inter-DC EVPN eBGP session between the border leafs]
- One EVPN administrative domain stretched across two data centers
EVPN VXLAN Fabric Inter Data Center Connectivity (Option B)
[Diagram: DC #1 and DC #2 EVPN iBGP fabrics joined over existing IP routing between the spine/aggregation layers; the red line shows the data path, with an EVPN eBGP session between the border leafs, carried in the global default VRF or user-space VRFs]
- One EVPN administrative domain stretched across two data centers
- Advantages of this option (a session sketch follows the list):
  - No changes on the existing spine/aggregation devices
  - Takes advantage of existing inter-DC links and routing
  - Only need to add border leaf VTEPs
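A sketch of the DC #1 border-leaf side of that inter-DC session, assuming illustrative loopbacks and AS numbers and multihop reachability over the existing routed links:

router bgp 65001
  neighbor 10.2.0.1 remote-as 65002
    description DC2-border-leaf
    update-source loopback0
    ebgp-multihop 5
    address-family l2vpn evpn
      send-community extended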
EVPN VXLAN Fabric Inter Data Center Connectivity (Option C)
[Diagram: EVPN Administrative Domain #1 (DC #1 iBGP fabric) and EVPN Administrative Domain #2 (DC #2 iBGP fabric), each with spine route reflectors and leaf/border-leaf VTEPs; the border leafs connect through a VLAN hand-off into a separate Inter-DC EVPN administrative domain running Inter-DC eBGP]
vPC VTEPs in MP-BGP EVPN
[Diagram: vPC VTEP with anycast VTEP address: vPC VTEP-1 and vPC VTEP-2 (BGP router IDs 1 and 2) joined by a virtual port channel, each with BGP peerings into the underlay IP network over Layer 3 links]
- When vPC is enabled, an ‘anycast’ VTEP address is programmed on both vPC peers, giving symmetrical forwarding behaviour on both peers
- The multicast topology prevents BUM traffic being sent to the same IP address across the L3 network (prevents duplication of flooded packets)
- The vPC peer-gateway feature must be enabled on both peers
- The VXLAN header is not carried on the vPC peer link (MCT link)
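A minimal sketch of how the anycast VTEP address is typically realized on NX-OS: both vPC peers share a secondary IP on the NVE source loopback (all addresses illustrative):

vpc domain 1
  peer-gateway
!
interface loopback1
  ! primary address is unique per vPC peer
  ip address 10.1.1.1/32
  ! secondary address is the shared anycast VTEP, identical on both peers
  ip address 10.1.1.100/32 secondary
!
interface nve1
  source-interface loopback1

Remote VTEPs learn the secondary address as the BGP next hop, so either vPC peer can receive and forward traffic for the pair.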
Scalability Limits
Nexus 2000 Series Fabric Extenders (FEX) Verified Scalability Limits
Verified limits (9500 Series / 9300 Series):
- Fabric Extenders and Fabric Extender server interfaces: not applicable / 16 Fabric Extenders and 768 server interfaces
- VLANs per Fabric Extender: 2000 (across all Fabric Extenders)
- VLANs per Fabric Extender server interface: 75
- Port channels: 500
Interfaces Verified Scalability Limits
Verified limits (9500 Series / 9300 Series):
- Generic routing encapsulation (GRE) tunnels: 8
- Port channel links: 32
- SVIs: 490 / 250
- vPCs: 275 / 100 (280 with Fabric Extenders)
Layer 2 Switching Verified Scalability Limits
Verified limits (9500 Series / 9300 Series):
- MST instances: 64
- MST virtual ports: 85,000 / 48,000
- RPVST virtual ports: 22,000 / 12,000
- VLANs: 4000 / 3900
- VLANs in RPVST mode: 500
Multicast Routing Verified Scalability Limits
Verified limits (9500 Series / 9300 Series):
- IPv4 multicast routes: 32,000 / 8000
- Outgoing interfaces (OIFs): 40 (see CSCum58876)
Thank You