1
Switched Campus Intranets
1134_04F8_c4
2
Please see the white paper: Designing High Performance Campus Intranets with Multilayer Switching
Geoff Haviland, Network Design Engineer
May 1998
3
Designing Intranets with Multilayer Switching
Technology Considerations
Design Alternatives
The Multilayer Design
4
Technology Considerations
Issues with flat-bridged networks
Spanning tree limitations
Switching at L2, L3, and L4
L2 = L3 = L4 switching performance
Multilayer switch specialization
High-performance design requirements: fast convergence, deterministic paths, deterministic failover, scalable size and throughput
Server farms and the new 80/20 rule
5
The Key Aspects of L3 Switching (The Same Things Routers Do)
Packet switching (NetFlow, Express Forwarding): packet-by-packet switching in hardware, with L2 = L3 = L4 performance
Route processing (OSPF, EIGRP, BGP4, PIM, RIP, etc.): path determination, load balancing, and summarization; keys to scalability and stability
Intelligent network services (HSRP, access lists, mobility, TACACS, debug, QoS, NTP, etc.): policy enablers, proxy services, QoS features, and debugging; keys to manageability, troubleshooting, and application availability
6
Campus Design with Switching
Spanning tree protocol
Hub and router design
Campus-wide VLANs
MPOA
Multilayer designs
7
Spanning Tree Considerations
Root election: root priority, root ID
Shortest path to the root: path cost, port priority, port ID
Convergence: forward delay, max age, hello time
Load balancing
8
Broadcast Domain (= Subnet = VLAN = Spanning Tree)
Number of users?
Network protocols: IP, IPX, NetBIOS; routing protocols: OSPF, SAP, EIGRP, RIP
Spanning tree considerations: load balancing, traffic flow
MTU sizes, frame translations, frame fragmentation, SRB/SRS integration
A broadcast domain is a failure domain!
9
Root Election (B = Blocking, F = Forwarding)
BPDU = Bridge Protocol Data Unit
Root Priority = 2 bytes (default 32768)
Root ID = Root Priority + Bridge ID; lower priority and lower ID win
Port Cost = default (10^9 / line rate)
Port ID = Port Priority + Port Number
(Diagram: a root bridge with priority 200, a designated bridge, root ports and designated ports forwarding (F), and one redundant port blocked (B).)
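The root election can be steered by lowering the bridge priority below the 32768 default; a hedged CatOS sketch for the era of this deck (the VLAN number is illustrative, and the comment lines are for explanation only):

```
! Make this switch the likely root for VLAN 10: lowest
! priority wins the election, ties broken by lowest bridge ID
set spantree priority 200 10
```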
10
Spanning Tree New Port Cost Values
Spanning Tree New Port Cost Values (based on the 802.1D spec)

Speed      Cost
10 Gb/s       2
1 Gb/s        4
622 Mb/s      6
155 Mb/s     14
100 Mb/s     19
45 Mb/s      39
16 Mb/s      62
10 Mb/s     100
4 Mb/s      250
11
Convergence Tuning: Diameter
Port states: blocking, listening, learning, forwarding
BPDU hello = 2 seconds (default)
Forward delay = 15 seconds (default)
Maximum age = 20 seconds (default)
Convergence = (2 x forward delay) + maximum age = 50 seconds with the defaults
Tuning: reduce the effective diameter on the root switch (set spantree root)
(Diagram: root bridge, designated bridge, root ports, and designated ports; when a forwarding link fails (X), a blocked port must transition to forwarding.)
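The diameter tuning mentioned above maps to the CatOS root macro; a minimal sketch, assuming VLAN 10 and a diameter of 2 (both values illustrative):

```
! Become root for VLAN 10 and recompute hello, forward delay,
! and max age from a network diameter of 2 (802.1D formulas)
set spantree root 10 dia 2
```

Shrinking the assumed diameter lets the macro safely shorten the timers, which is where the 50-to-15-second recovery improvement cited later comes from.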
12
STP Port States
Blocking: no traffic through the port; BPDUs still received
Listening: no traffic through the port; BPDUs sent and received
Learning: no traffic through the port; building the bridge table
Forwarding: user traffic across the port; BPDUs transmitted and received
13
Recovery Tuning
Uplink Fast: no tuning; 3 seconds; wiring closet only; applies only to VLANs with a loop (triangle)
HSRP: no tuning; 2 seconds; distribution; use the tracking feature
OSPF: hello timer 1 second, dead timer 3 seconds; recovery in 6 seconds across the backbone
EIGRP: hello timer 1 second, hold timer 3 seconds; recovery in 3 seconds across the backbone
STP: tune the diameter to 2 on the root switch; improves recovery from 50 seconds to 15 seconds
Avoid STP loops and tuning where you can!
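The OSPF and EIGRP timer values above correspond to standard IOS interface commands; a sketch, with the interface names and the EIGRP AS number (100) assumed for illustration:

```
! OSPF: 1 s hellos, 3 s dead timer on a backbone interface
interface FastEthernet1/0
 ip ospf hello-interval 1
 ip ospf dead-interval 3
!
! EIGRP: 1 s hellos, 3 s hold time (AS 100 is hypothetical)
interface FastEthernet1/1
 ip hello-interval eigrp 100 1
 ip hold-time eigrp 100 3
```

Both timers must match on all routers sharing the segment, or adjacencies will flap.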
14
The Hub and Router Model
Hierarchical and modular
Deterministic for troubleshooting
L3 features and Cisco IOS™ advantages
Addressing scalability
Multiprotocol routing and support
Middleware, DLSw+, GNS proxy
Proxy ARP, DHCP relay, debug
Basis of many large intranets
15
Traditional Router and Hub Campus
(Diagram: Buildings A, B, and C with hubs at the access layer, a distribution layer, and a dual-homed FDDI/ATM core with enterprise servers; workgroup servers at each building.)
The L2/L3 design provides for effective redundancy and load balancing. Workgroups have redundant connectivity to the distribution layer using the ISL trunking protocol. ISL trunking provides an efficient way to configure one or more workgroups on a wiring-closet switch, and an effective way for workgroups to span more than one switch.
In the L2/L3 model, enterprise servers are centralized with connectivity at the core layer. Access to all workgroups is by a single-hop, high-performance path. Workgroup-level servers may attach at the distribution layer, or at the wiring-closet switch in the access layer.
Redundant connectivity to a switch domain (building) is provided by a pair of Catalyst 5000 switches with NFLS. Thus all workgroups have a redundant connection to the distribution layer, and in turn a redundant path to the core and out to the rest of the enterprise.
16
VLAN Design Considerations
A VLAN is a flat-bridged network
Trunking and ATM LANE
Campus-wide VLAN model: depends on the 80/20 traffic pattern
Router-on-a-stick considerations
Useful tools for multilayer design
17
VLAN Technologies
(Diagram: Catalyst 5000 switches with LANE cards acting as LANE clients (LECs) for the Pink, Purple, and Green workgroup VLANs; an ISL-attached enterprise server; an ATM switch carrying the VLAN trunks; workgroup servers and clients in the Pink and Green VLANs.)
18
VLAN Trunking Protocols
ISL (Inter-Switch Link, used on Fast Ethernet): adds 30 bytes of encapsulation to each frame
802.10 (FDDI trunking): carries the SAID (Security Association ID); adds 16 bytes to the frame header
DISL (Dynamic ISL): point-to-point protocol over a Fast Ethernet trunk
LANE (LAN Emulation): extends VLANs into the ATM domain
VTP (VLAN Trunk Protocol): distributes VLAN configuration across trunks
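On Catalyst switches of this era, an ISL trunk is brought up per port; a hedged CatOS sketch (the module/port number is illustrative):

```
! Force port 1/1 into trunking mode; ISL is the Fast Ethernet
! trunk encapsulation described on this slide
set trunk 1/1 on
! Verify trunk state and which VLANs it is carrying
show trunk 1/1
```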
19
Traditional Campus-Wide VLAN Design
(Diagram: campus-wide VLANs. Four workgroups (Blue, Pink, Purple, Green) span Buildings A, B, and C through the access, distribution, server distribution, and core layers; a Green workgroup server and ISL-attached enterprise servers.)
20
Campus-Wide VLANs and Multilayer Switching
(Diagram: the same four campus-wide workgroups (Blue, Pink, Purple, Green) with a Catalyst multilayer switch at the distribution layer, FEC/ISL trunks, and FE- and FEC-attached enterprise servers behind the server distribution block.)
21
MPOA Design Considerations
ATM to the wiring closet
L3 switching with header rewrite
MPOA handles IP unicast
Multiprotocol packet flow?
Multicast packet flow?
22
MPOA Campus Design
(Diagram: access switches with MPOA clients (MPCs) serving Green and Pink VLAN clients; the MPOA server (MPS) routes the first packet of an IP unicast flow; enterprise and multiprotocol servers attach by FE/FEC and ATM.)
23
The Multilayer Design Model Must Solve These Issues
The new 80/20 traffic pattern
IP mobility (use DHCP)
Optimal use of multilayer switching
Recovery for L2 and L3 failures
Fast deterministic convergence
High-performance server farms
Scaling bandwidth
Ethernet and ATM backbones
24
Benefits of Modular Design
Wiring closet = access layer: load balancing and redundancy to the wiring closet; scalable trunking (FE, FEC, GE, GEC); fast failover with Uplink Fast and HSRP; a Layer 2 switch with Ethernet trunking up the riser
Distribution: Layer 3 switch (= router); L3/L4 switching with intelligent routing protocols and services, fast failover, and manageability; redundancy and load balancing across multiple paths; FE-ISL or 802.1Q trunks
Campus backbone (frame or ATM): Layer 2 or Layer 3 trunking (Gigabit Ethernet or GEC, ATM OC-3 or OC-12); capacity scales as you add modules
25
Generic Modular Campus Design
(Diagram: buildings with Layer 2 switches at the access layer, Layer 3 switches (routers) at the distribution layer, a frame or ATM backbone at Layer 2 or Layer 3, and a server distribution block of Layer 3 switches fronting a Layer 2 server farm; FE, FEC, GigE/GEC, and ATM trunks.)
26
Multilayer Campus Design
L2 switching at the wiring closet
L3 or L4 switching at distribution
L2 or L3 switching in the backbone, or ATM/PNNI in the backbone
Hierarchical design scales
Address summarization
27
Multilayer Campus Distribution L3/L4 Building Blocks
L3 routing across the backbone: fast OSPF or EIGRP recovery; load balancing over up to six paths
L3/L4 at the distribution layer: L2, L3, and L4 switching in ASICs; policy with no performance impact
L3/L4 features and services: multiprotocol support and features; broadcasts kept off the backbone
28
Multilayer Switching: Hierarchical Campus
(Diagram: Buildings A, B, and C with Catalyst 5000 L2 switches at the access layer, Catalyst multilayer switches at the distribution layer, and Catalyst L2 switches in the core; an ISL-attached building server, an FEC-attached workstation, and FEC-attached enterprise servers.)
29
Multilayer Model with Redundancy
Redundant building blocks: 1000+ clients don't lose connectivity
L3 routing across the backbone: load balancing; fast deterministic failover
Redundant to the wiring closet: Uplink Fast for L2; HSRP for L3
30
Redundant Multilayer Campus Design
(Diagram: North, West, and South buildings; distribution switches A, B, C, and D; core switches X and Y; ISL-attached building servers and FEC-attached enterprise servers.)
The Catalyst 5000 switches with NFLS at the distribution layer provide redundant connectivity into the core. Within the core, high-capacity trunks use Fast Ethernet or Fast EtherChannel depending on bandwidth requirements. The Cisco IOS provides load balancing and fast failover across the core for network protocols.
31
Enterprise Server Farm Design
Server distribution block
Scalable bandwidth
Consistent diameter of two hops: deterministic and scalable
HSRP redundancy: no default gateway issues; no IP redirect issues
Load balancing
Server-to-server traffic stays off the backbone
32
Multilayer Model with Server Farm
(Diagram: North, West, and South buildings; core switches V and W; server distribution switches X and Y with Gigabit Ethernet trunks into the core; FE- and FEC-attached enterprise servers; an ATM uplink.)
33
Server Attachment in the Multilayer Model
(Diagram: workgroup servers M and N attach at the access layer in VLANs A, B, C, and D; server distribution switches W/X and Y/Z run HSRP; FE- and FEC-attached enterprise servers hang off the server distribution block; Gigabit Ethernet and ATM trunks into the core.)
34
Wiring Closet Connectivity
Load balancing: use what you pay for
Uplink Fast L2 convergence: three-second failover
HSRP L3 convergence: two-second failover
35
Redundancy with HSRP
(Diagram: hosts A, B, C, and D in even and odd subnets behind the access layer; ISL trunks multiplex the VLANs over Fast Ethernet or Fast EtherChannel®; switch X is HSRP primary for the even subnets/VLANs 10, 12, 14, 16 and switch Y for the odd subnets/VLANs 11, 13, 15, 17.)
Cisco's Hot Standby Router Protocol (HSRP) provides gateway redundancy for IP hosts. Two Catalyst 5000 switches with the RSM act together to provide the gateway redundancy. If one Catalyst 5000 switch experiences a failure or a loss of connectivity, the other automatically takes over as gateway. The switch-over takes a few seconds, fast enough that most sessions remain intact.
HSRP also supports load balancing. Multiple addresses can be configured to act as primary gateway for different groups of hosts. This feature is called Multiple HSRP (MHSRP).
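A minimal IOS sketch of the arrangement above on the two RSMs; the addresses, VLAN number, and priorities are illustrative, not from the deck:

```
! RSM X: primary gateway for even VLAN 10
interface Vlan10
 ip address 172.16.10.2 255.255.255.0
 standby 1 ip 172.16.10.1
 standby 1 priority 110
 standby 1 preempt
!
! RSM Y: standby for VLAN 10 (it is primary for the odd VLANs)
interface Vlan10
 ip address 172.16.10.3 255.255.255.0
 standby 1 ip 172.16.10.1
 standby 1 priority 100
```

Hosts use the shared virtual address 172.16.10.1 as their default gateway; whichever RSM holds the higher priority answers for it.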
36
HSRP with Tracking
Determine the STP path and load balancing by defining root bridges per VLAN, and align the HSRP priorities with them (addresses omitted on the slide):

RSM1:
  interface vlan 2
   ip address <addr> <mask>
   standby 1 ip <virtual-addr>
   standby 1 priority 300
   standby 1 track <interface>
   standby 1 preempt

RSM2:
  interface vlan 2
   ip address <addr> <mask>
   standby 1 ip <virtual-addr>
   standby 1 priority 200
   standby 1 track <interface>

(Diagram: RSM1 is STP root for VLAN 2 and RSM2 for VLAN 3; redundant links are blocked (B); vlan 99 trunks run between the switches.)
37
VLAN Trunking for Load Balancing
(Diagram: wiring-closet switches A, B, C, and D carry VLAN pairs 10/11, 12/13, 14/15, and 16/17 over ISL trunks multiplexed on Fast Ethernet or Fast EtherChannel. Switch X is STP root for the even VLANs 10, 12, 14, 16 and switch Y for the odd VLANs 11, 13, 15, 17, so each uplink forwards (F) one VLAN of the pair and blocks (B) the other, e.g. F10/B11 on one uplink and F11/B10 on the other.)
38
Uplink Fast
Used for switches in the wiring closet
Changes the bridge priority and the port/VLAN cost parameters
Groups the uplink ports
Moves a port directly from the blocking state to the forwarding state
No topology change notification is generated
Generates proxy multicast packets so upstream bridge tables are updated
Does not work with LANE
Limit the number of VLANs to achieve fast convergence
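Uplink Fast is a single switch-wide toggle on the access switch; a hedged CatOS sketch:

```
! Enable Uplink Fast on the wiring-closet switch only; it raises
! the bridge priority and port costs so this switch cannot
! become root or a transit path
set spantree uplinkfast enable
```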
39
VLAN Trunking with Uplink Fast Failover
(Diagram: the same VLAN trunking topology during a failover. When a forwarding uplink fails (marked X), Uplink Fast moves the formerly blocked uplink (Z) directly to forwarding, restoring connectivity to roots X and Y for the affected VLANs.)
40
Scaling Ethernet Trunking
Switch, router, and server connections
Load distribution
Ethernet bandwidth options: Fast Ethernet, Fast EtherChannel, Gigabit Ethernet, Gigabit EtherChannel
VLAN trunking with ISL or 802.1Q
41
EtherChannel
Built on the 802.3 full-duplex standard
Provides 200 or 400 Mb/s full duplex (or 2 or 4 Gb/s for Gigabit EtherChannel)
Groups two or four ports
Load balancing based on an XOR of source and destination addresses; multicast is distributed by source address
GEC supported on Bladerunner
Spanning tree supported (software 3.1 or better)
Only a software upgrade on the routers
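On CatOS, a channel is formed by grouping ports; a sketch with illustrative module/port numbers:

```
! Bundle ports 2/1 and 2/2 into one Fast EtherChannel
set port channel 2/1-2 on
! Verify the channel and how frames are distributed across it
show port channel
```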
42
EtherChannel Port Aggregation Protocol (PAgP)
Runs on Fast EtherChannel links
PAgP helps avoid misconfiguration
Four modes of operation: on, desirable, auto, off
43
PAgP Operation
PAgP runs on the link in these mode combinations:
Auto <-> Auto
Auto <-> Desirable
Desirable <-> Desirable
(A channel actually forms only when at least one end is in desirable mode.)
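With PAgP negotiation available, the safer configuration is desirable mode rather than a hard-coded on; a hedged CatOS sketch:

```
! Let PAgP negotiate the bundle; if the far end is misconfigured
! the ports stay up as individual links instead of forming a loop
set port channel 2/1-2 desirable
```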
44
Scaling Ethernet Trunk Bandwidth
Best: a Fast EtherChannel ISL trunk carrying VLANs 1, 2, 3, 4, 5, 6 at 400 Mb/s full duplex
Good (B): multiple Fast Ethernet ISL trunks with the VLANs split across them (1 and 2, 3 and 4, 5 and 6)
OK (C): one Fast Ethernet link per VLAN (1, 2, 3)
45
Policy in the Backbone
VLAN per policy
VLAN per protocol
Logical or physical partitioning
46
Logical or Physical Partitioning of the Core
(Diagram: domains A, B, and C at the distribution layer with workgroup servers. The core is partitioned into VLAN 100, an IP subnet serving IP and WWW servers, and VLAN 200, IPX network BEEF0001 serving Novell IPX® file servers; server distribution switches V, W, X, and Y.)
47
Multilayer with LANE Backbone
Ethernet up the riser
Redundancy recommendations: two ELANs in the backbone; SSRP for the LES/BUS and LECS; dual-PHY connectivity
Same building blocks
Hierarchical ATM and PNNI
48
Multilayer Model with ATM LANE Core
(Diagram: domains A, B, and C in Buildings A, B, and C, with OC-3 or OC-12 uplinks from the distribution layer into an ATM LANE core of LightStream® 1010 switches; the primary LECS on one core switch and backups on the others; Catalyst 5000 switches X and Y host the primary and backup LES/BUS; FEC-attached enterprise servers behind the server distribution.)
The L2/L3 hierarchical model works equally well with an ATM backbone. The logical design of networks and subnets remains the same. Today ATM backbones are typically implemented using LAN Emulation, or LANE. Native ATM enterprise servers attach as ATM LANE clients to an ATM switch in the core. In this picture, Ethernet or Fast Ethernet servers connect to the backbone by a Catalyst 5000 switch, which acts as a LANE client to the ATM core. Alternatively, the ATM switch in the backbone can be combined with the Ethernet switch in a single Catalyst 5500 switch. Trunking within the core can be OC-3 or OC-12. ATM PNNI routing also supports load balancing over multiple links between adjacent ATM switches.
49
IP Multicast Forwarding and CGMP/IGMP
Without CGMP, multicast frames are forwarded to every switch port in a VLAN. With CGMP, an end station issues a standard IGMP request for Group 1, the PIM router relays it to the switch as a CGMP message, and Group 1 traffic reaches only the ports that joined.
CGMP prevents flooding of unnecessary multicast traffic
Uses standard IGMP on the user end station
The router optimizes the switch multicast forwarding table
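A minimal sketch of turning this on, assuming IOS on the distribution router and CatOS on the wiring-closet switch (the interface and VLAN numbers are illustrative):

```
! Router side: PIM must be running before CGMP has joins to announce
interface Vlan10
 ip pim dense-mode
 ip cgmp
```

On the switch side, `set cgmp enable` lets the supervisor act on the router's CGMP messages.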
50
IP Multicast Routing Protocol Roundup
DVMRP: dense representation only; first-generation technology; PIM interoperation; protocol dependent; scalability limitations
CBT: sparse representation; increased efficiency; improved scalability; limited commercial deployment
MOSPF: dense representation only; routing-protocol dependent; not efficient; scalability limitations; limited industry experience
PIM dense and sparse mode: dense or sparse representation; scalable; efficient; not routing-protocol dependent; suitable for all environments
51
Multicast Firewall and Backbone
(Diagram: clients for multicast groups A, B, and C behind the distribution layer, each receiving only its own group. The core is split into multicast VLAN 100, an IP subnet serving the multicast server farm, and unicast VLAN 200, an IP subnet serving the unicast server farm; Gigabit Ethernet into the server distribution.)
52
WAN Distribution Building Block
Redundant firewall example
Attaches to the backbone
Peer module across the backbone
53
WAN Distribution Building Block
(Diagram: the WAN distribution block attaches to the core layer; a multilayer switch acts as the inner firewall router; bastion hosts, web servers, and firewall devices sit in the DMZ; outer firewall routers connect to the Internet service providers.)
54
Spanning Tree Boundary
Bridging (here with the DEC spanning tree protocol) is confined to the access side; the routed backbone is the spanning tree boundary. The slide's configuration, with addresses omitted as on the original:

On each of the two redundant routers (ISL subinterface toward the closet):
  bridge 1 protocol dec
  interface fast 0/0.1
   ip address <addr> <mask>
   standby ip <virtual-addr>
   encapsulation isl <vlan>
   bridge-group 1

On the third router (physical interface):
  bridge 1 protocol dec
  interface fast 1/1
   bridge-group 1
55
RSM (Route Switch Module)
Two SAGE channels to the bus
Maximum 400 Mb/s throughput and 170 Kpps optimum switching
One logical interface per VLAN
Multiprotocol networking, routing protocols, HSRP
NFFC: IP switching in hardware
56
L3 Switching with the NFFC
The router multicasts its MAC address and VLAN pairs
The first packet of a flow is the candidate packet
The switch enables the flow once the packet comes back from the router
Subsequent packets in the flow are switched by the NFFC
(Diagram: host A in VLAN 2 and host B in VLAN 3; after the first routed packet, the switch shortcuts the flow.)
57
L3 Switching with the NFFC
A flow is a unidirectional sequence of packets between two endpoints, defined down to the transport layer
The NFFC parses packets in hardware as far as the transport layer
MLSP (Multilayer Switching Protocol) is used to inform switches of routing and access-list changes
58
L3 Switching with the NFFC
NFFC (NetFlow Feature Card)
Mounted on the Supervisor III; used with the RSM
NetFlow data collection at the flow level
Policy (ACL) enforcement at the flow level
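Enabling NFFC-based multilayer switching takes a few commands on each side; a hedged sketch using the MLS commands of that era (the interface and VTP domain name are illustrative):

```
! RSM (IOS): announce this route processor to the switches via MLSP
mls rp ip
interface Vlan2
 mls rp vtp-domain campus
 mls rp management-interface
 mls rp ip
```

On the Catalyst supervisor, `set mls enable` turns on the shortcut switching described on these slides.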
59
L3 Switch Specialization
Common IOS services: routing (OSPF, EIGRP); multicast (PIM); BOOTP and DHCP; VLANs; HSRP; load balancing; management (CDP, NTP); access lists; TACACS+
Specialized services (platform specific):
Cisco 7500 (LAN/WAN integration, advanced services): multiprotocol routing; SNA integration; NetFlow accounting; extensive IP QoS; voice, security, encryption/compression
Catalyst 5500 (wiring closet/distribution, port density): 1-3 Mpps IP routing; multiprotocol routing; NetFlow accounting; VLAN architecture; QoS enablers
Catalyst 8500 (backbone/data center, scalability): 5-24 Mpps IP, IPX, and IP multicast routing; QoS enforcement; VLAN aware; per-flow queuing
60
Catalyst 8500 Multilayer Switch: Backbone or Data Center Distribution
Cisco IOS: Cisco Express Forwarding; wire-speed IP and IPX switching; Cisco IOS routing protocols; wire-speed IP multicast; extensive QoS capabilities; access control lists
Catalyst 8540: 13-slot chassis; nonblocking fabric; hot-swappable line cards; redundancy
Performance: 10 to 40 Gb/s switching fabric; wire-speed (5-24 Mpps) throughput
Line modules: 10/100, GE, FE/GE channel; OC-3, OC-12, OC-48 ATM and PoS uplinks
61
ACME Campus Case Study (From Datacomm Shootout)
2200 stations
Six buildings
Enterprise server farm
Distributed departmental servers
Mixture of hubs, cabling, and routers
62
Case Study Solution High-Performance L3 Backbone
L3 switched backbone, Catalyst 8540 based
Non-blocking switching
Load balancing within the backbone
High-capacity trunking: FE, FEC, GE, GEC
HSRP for server farms
63
Case Study—Collapsed Backbone
(Diagram: an access layer of Catalyst 55xx switches and a distribution layer of Catalyst 5505s, with modules labeled M1, M2, RD1/RD2, and A1/A2 collapsing onto a Catalyst 8540 backbone; FE- and FEC-attached enterprise servers and Gigabit Ethernet trunks into the 8540.)
64
M2 Module: Non-Blocking
Backbone and enterprise server farm: a pair of Catalyst 8540s running HSRP, with a connection to the WAN and FE- and FEC-attached enterprise servers.
Access: six Catalyst 5505 Layer 2 switches, 96 ports of 10/100 each, two subnets per switch, Uplink Fast enabled; GE trunks to M2, FEC trunks to A1, and FE trunks to RD1.
65
Advantages of the Multilayer Model
Scales as you add modules: performance scales; addressing scales
Fast deterministic convergence: L3 routing across the backbone; Uplink Fast and HSRP to the wiring closet
Load balancing: use what you pay for
66