Slide 1: A 2.5Tb/s LCS Switch Core
Nick McKeown, Costas Calamvokis, Shang-tse Chuang
Accelerating The Broadband Revolution
PMC-Sierra
August 20th, 2001
Slide 2: Outline
1. LCS (Linecard to Switch Protocol): what is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.
Slide 3: Next-Generation Carrier Class Switches/Routers
[Diagram: linecards connected to a separate switch core.]
Slide 4: Benefits of LCS Protocol
1. Large number of ports: separation enables a large number of ports in multiple racks, and distributes system power.
2. Protection of end-user investment: future-proof linecards.
3. In-service upgrades: replace the switch or linecards without service interruption.
4. Enables differentiation/intelligence on the linecard: the switch core can be bufferless and lossless; QoS, discard, etc. are performed on the linecard.
5. Redundancy and fault-tolerance: full redundancy between switches to eliminate downtime.
Slide 5: Main LCS Characteristics
1. Credit-based flow control: enables separation and a bufferless switch core.
2. Label-based multicast: enables scaling to larger switch cores.
3. Protection: CRC protection; tolerant to loss of requests and data.
4. Operates over different media: optical fiber, coaxial cable, and backplane traces.
5. Adapts to different fiber, cable, or trace lengths.
Slide 6: LCS Ingress Flow Control
[Diagram: the three-step LCS exchange between a linecard and a switch port. 1: the linecard sends a Request; 2: the switch scheduler returns a Grant/credit carrying a sequence number; 3: the linecard sends the cell Data into the switch fabric.]
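As an illustration of the handshake in this diagram, here is a minimal Python sketch of LCS credit-based flow control; the class, single queue, and method names are illustrative assumptions, not the actual LCS specification (real linecards keep virtual output queues).

```python
from collections import deque

class LinecardIngress:
    """Toy model of LCS ingress flow control: the linecard holds cells and
    may forward one into the (bufferless) switch core only after receiving
    a grant/credit for it from the switch scheduler."""

    def __init__(self):
        self.cells = deque()    # cells awaiting credits (real LCS uses VOQs)
        self.credits = deque()  # grants received, each with a sequence number

    def enqueue(self, cell):
        self.cells.append(cell)

    def request(self):
        """Step 1: ask the switch scheduler for permission to send a cell."""
        return len(self.cells) > 0

    def grant(self, seq_num):
        """Step 2: the scheduler issues a grant/credit tagged with a sequence
        number, so linecard and core stay in step even over a long fiber."""
        self.credits.append(seq_num)

    def send(self):
        """Step 3: transmit exactly one cell per credit received."""
        if self.credits and self.cells:
            return (self.credits.popleft(), self.cells.popleft())
        return None

lc = LinecardIngress()
lc.enqueue(b"cell-0")
if lc.request():         # 1: Req
    lc.grant(seq_num=7)  # 2: Grant/credit
print(lc.send())         # 3: Data -> (7, b'cell-0')
```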
Slide 7: LCS Adapting to Different Cable Lengths
[Diagram: linecards at different distances from the switch core, each speaking LCS to the core's switch scheduler and switch fabric; the protocol adapts to each link's length.]
Slide 8: LCS Over Optical Fiber
[Diagram: a 10Gb/s linecard connected to a 10Gb/s switch port over 12 multimode fibers at 2.5Gb/s each, with LVDS signaling into GENET quad serdes parts.]
Slide 9: Example of OC192c LCS Port
[Diagram: the LCS protocol carried to an OC192 linecard over 12 serdes channels.]
Slide 10: Outline
1. LCS (Linecard to Switch Protocol): what is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.
Slide 11: Main Features of Switch Core
A 2.5Tb/s single-stage crossbar switch core with centralized arbitration and an external LCS interface.
1. Number of linecards: 256 at 10G/OC192c; 1024 at 2.5G/OC48c; 64 at 40G/OC768c.
2. LCS (Linecard to Switch Protocol): distance from linecard to switch 0-1000ft; payload size 76+8B; payload duration 36ns; optical physical layer 12 x 2.5Gb/s (see the arithmetic check below).
3. Service classes: 4 best-effort + TDM.
4. Unicast: true maximal-size matching.
5. Multicast: highly efficient fanout splitting.
6. Internal redundancy: 1:N.
7. Chip-to-chip communication: integrated serdes.
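The LCS numbers in item 2 can be sanity-checked with quick arithmetic. The sketch below assumes 8b/10b coding on the 2.5Gb/s serdes lanes, which the slide does not state:

```python
# Back-of-envelope check of the LCS port numbers above. The 8b/10b coding
# overhead on each 2.5Gb/s serdes lane is an assumption, not on the slide.
cell_bits = (76 + 8) * 8                # 76B payload + 8B header per cell
cell_time_ns = 36
port_rate = cell_bits / cell_time_ns    # ~18.7 Gb/s carried per switch port,
                                        # a built-in speedup over the 10Gb/s
                                        # line rate of an OC192c linecard
lanes = 12 * 2.5                        # 30 Gb/s of raw serdes bandwidth
usable = lanes * 8 / 10                 # ~24 Gb/s after assumed 8b/10b coding
print(f"{port_rate:.1f} Gb/s needed, {usable:.1f} Gb/s available")
```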
Slide 12: 2.56Tb/s IP router
[Diagram: linecards at ports #1 through #256 connected to the 2.56Tb/s switch core over LCS links of up to 1000ft/300m.]
Slide 13: Switch core architecture
[Diagram: port processors #1 through #256, each with optics and the LCS protocol engine, surround a crossbar. A central scheduler exchanges Requests and Grants/Credits with the port processors, while Cell Data flows through the crossbar.]
Slide 14: Outline
1. LCS (Linecard to Switch Protocol): what is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.
Slide 15: How to build a scalable crossbar
1. Increasing the data rate per port: use bit-slicing (e.g. Tiny Tera); see the sketch after this list.
2. Increasing the number of ports:
Conventional wisdom: N² crosspoints per chip is a problem.
In practice: today, crossbar chip capacity is limited by I/Os, and it is not easy to build a crossbar from multiple chips.
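To make the bit-slicing idea in item 1 concrete, here is a hedged sketch: a cell is striped across k identical crossbar planes that all carry the same configuration, so each plane runs at 1/k of the port's data rate. The byte-interleaved striping is illustrative; Tiny Tera's exact slice format is not given here.

```python
def slice_cell(cell: bytes, k: int) -> list[bytes]:
    """Stripe one cell across k crossbar planes (byte-interleaved). All
    planes are set to the same input-output configuration, so the slices
    arrive at the same output port and reassemble in order."""
    return [cell[i::k] for i in range(k)]

def reassemble(slices: list[bytes]) -> bytes:
    k = len(slices)
    out = bytearray(sum(len(s) for s in slices))
    for i, s in enumerate(slices):
        out[i::k] = s
    return bytes(out)

cell = bytes(range(84))  # one 76+8 byte LCS cell
assert reassemble(slice_cell(cell, 4)) == cell  # each plane carries 21 bytes
```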
Slide 16: Scaling: Trying to build a crossbar from multiple chips
Building block: a crossbar chip with 4 inputs and 4 outputs.
To tile a 16x16 crossbar from these blocks, each chip must also carry its neighbors' pass-through signals, so eight inputs and eight outputs are required per chip!
Slide 17: Scaling using "interchanging": 4x4 example
[Diagram: a 4x4 crossbar fed through a 2x4 interchanger (INT, 2 I/Os). One timeline shows the crossbar reconfigured every cell time, carrying cells as AA/BB; the other shows it reconfigured every half cell time, carrying interleaved AB/AB, so two logical ports can share each physical crossbar port.]
Slide 18: 2.56Tb/s Crossbar operation
[Diagram: 128 inputs and 128 outputs per plane. Fixed 2x2 "TDM" interchangers feed two 128x256 crossbar planes, Crossbar A and Crossbar B, with A and B cells alternating through interchangers on both the input and output sides.]
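A toy model of one reading of this diagram (the actual PMC-Sierra datapath is certainly more involved): 256 logical inputs are paired into 128 fixed 2x2 TDM interchangers, and each half cell time one input of a pair uses crossbar plane A while the other uses plane B, swapping at the half-cell boundary:

```python
def plane_inputs(half: int) -> tuple[list[int], list[int]]:
    """Which of the 256 logical inputs feed crossbar planes A and B during
    half-cell phase 0 or 1. Input pair (2i, 2i+1) shares one fixed 2x2 TDM
    interchanger and alternates between the two 128-wide planes."""
    a = [2 * i + half for i in range(128)]        # plane A's inputs this phase
    b = [2 * i + (1 - half) for i in range(128)]  # plane B's inputs this phase
    return a, b

a0, _ = plane_inputs(0)
a1, _ = plane_inputs(1)
# Over one full cell time, plane A (like plane B) serves all 256 inputs,
# which is how two 128x256 crossbars can emulate one 256x256 crossbar.
assert sorted(a0 + a1) == list(range(256))
```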
Slide 19: Outline
1. LCS (Linecard to Switch Protocol): what is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.
Slide 20: How to build a centralized scheduler with true maximal matching?
Usual approaches:
1. Use sub-maximal matching algorithms (e.g. iSLIP). Problem: reduced throughput.
2. Increase arbitration time via load-balancing. Problem: imbalance between layers leads to blocking and reduced throughput.
3. Increase arbitration time via a deeper pipeline. Problem: usually involves out-of-date queue occupancy information, hence reduced throughput.
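For contrast with these approaches, here is a minimal sequential Python sketch of a true maximal match over a set of (input, output) requests. The real scheduler computes the match in parallel hardware; this greedy loop is only meant to show the defining property, and the function name and tie-breaking order are my own.

```python
def maximal_match(requests: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """Greedy maximal bipartite matching: grant requests one by one whenever
    both the input and the output are still free. The result is maximal (no
    ungranted request has both endpoints free), which a single iteration of
    schemes like iSLIP does not guarantee."""
    match, busy_in, busy_out = set(), set(), set()
    for i, o in sorted(requests):  # fixed order keeps the sketch deterministic
        if i not in busy_in and o not in busy_out:
            match.add((i, o))
            busy_in.add(i)
            busy_out.add(o)
    return match

# Inputs 0 and 1 both request output 0; input 1 also requests output 1.
print(maximal_match({(0, 0), (1, 0), (1, 1)}))  # {(0, 0), (1, 1)}
```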
Slide 21: How to build a centralized scheduler with true maximal matching?
Our approach maintains high throughput by:
1. Using a true maximal matching algorithm.
2. Using a single centralized scheduler, to avoid the blocking caused by load-balancing.
3. Using a deep, strict-priority pipeline with up-to-date information.
Slide 22: Strict Priority Scheduler Pipeline
[Diagram: one scheduler plane for the 2.56Tb/s switch handles one priority, unicast and multicast; four such stages, for priorities p=0 through p=3, sit behind the LCS port processors.]
Slide 23: Strict Priority Scheduler Pipeline (timing)
[Diagram: pipeline timing over cell times 0 through 6. The p=0 stage starts a new unicast/multicast match every cell time; the p=1, p=2, and p=3 stages each run one cell time behind the stage above, refining the same match, so four matches are in flight at once and one completes per cell time.]
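The staggering in this timing diagram can be written out compactly. This is my reading of the table: at clock time t, the priority-p stage refines the match for cell slot t - p, receiving the exact partial match left by stage p - 1, which is how the pipeline stays deep without using stale queue information.

```python
# Sketch of the pipeline timing as read from the slide: at clock time t the
# priority-p stage works on the match for cell slot t - p, so four partial
# matches are in flight at once and one finished match emerges per cell time.
for t in range(7):
    stages = [f"P={p} on slot {t - p}" for p in range(4) if t - p >= 0]
    print(f"time {t}: " + ", ".join(stages))
# time 0: P=0 on slot 0
# time 1: P=0 on slot 1, P=1 on slot 0
# time 2: P=0 on slot 2, P=1 on slot 1, P=2 on slot 0
# ...from time 3 on, all four stages are busy every cell time.
```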
Slide 24: Strict Priority Scheduler Pipeline
Why implement strict priorities in the switch core when the router needs to support services such as WRR or WFQ?
1. Providing these services is a Traffic Management (TM) function.
2. A TM can provide these services over a strict-priority switch core using a technique called Priority Modulation (sketched below).
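The slide names Priority Modulation but does not define it. Purely as a hedged illustration, one can picture the TM rotating which flow is tagged with the highest strict priority in proportion to its WRR weight; the pattern and function below are assumptions, not the actual TM algorithm.

```python
from itertools import cycle

# Illustrative-only sketch of priority modulation: approximate 3:1 WRR
# service over a strict-priority core by modulating which flow's cells are
# tagged with priority 0. The rotation pattern is an assumption.
weights = {"flow_a": 3, "flow_b": 1}
pattern = cycle([f for f, w in weights.items() for _ in range(w)])

def priorities_this_slot() -> dict[str, int]:
    """Strict priority (0 = highest) tagged on each flow's cell this slot."""
    favored = next(pattern)
    return {f: (0 if f == favored else 1) for f in weights}

# Over any 4 slots, flow_a is favored 3 times and flow_b once (~3:1 shares).
print([priorities_this_slot() for _ in range(4)])
```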
Slide 25: Outline
1. LCS (Linecard to Switch Protocol): what is it, and why use it?
2. Overview of the 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high-performance, centralized crossbar scheduler.