1
Next Generation Spine & Core Data Center Switches
2
Virtual Chassis Fabric
Access switch architecture options: IP fabric, MPLS fabric, MC-LAG, network overlays (VxLAN), and Virtual Chassis Fabric
Flexibility of architectures
10GbE/40GbE access and spine solutions
Software innovations
3
Simple Ethernet Fabric
Spine switch requirements across IP fabric, MPLS fabric, MC-LAG, network overlay (VxLAN), and simple Ethernet fabric designs:
High density 40GbE and 100GbE
Support for the 10GbE & 25GbE transitions in the access
Evolving cloud data center architectures (VxLAN, MPLS)
A platform for continued data growth
4
Introducing QFX10000 Series Spine Switches
FIXED & MODULAR 10G / 40G / 100G SPINE / CORE SWITCHES
Powered by Juniper custom silicon
MOST SCALABLE: meet rapid and continuing data growth
OPEN: accelerate innovation
FUTURE PROOF: invest for today and tomorrow
5
Fixed QFX10000 series switches
6
QFX10002-72Q
Fixed platform to support the transition from 10GbE to 40GbE and 100GbE
Compact form factor with high density
7
QFX10002-36Q
Fixed platform to support the transition from 10GbE to 40GbE and 100GbE
Half the port density of the QFX10002-72Q for smaller spine configurations
8
QFX10002-72Q and QFX10002-36Q
Front-to-back cooled system
Resilient power design for mission-critical applications
9
100GbE Support
QFX10002-72Q / QFX10002-36Q
Within a group of three ports, one port can be used as a 100GbE port (SR4, LR4)
The other two ports in the group become disabled while 100GbE is in use within that port group (three ports make up a port group)
All ports can be 40GbE
All ports can be 4x10GbE
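A minimal sketch of the port-group rule above, assuming a simple contiguous port-to-group mapping (ports 0-2 form group 0, and so on); the actual faceplate grouping may differ:

    # Hypothetical helper: ports are grouped in threes, and running one
    # port of a group at 100GbE disables the other two ports in it.
    GROUP_SIZE = 3

    def disabled_ports(hundred_gig_ports):
        """Return the set of ports lost when the given ports run at 100GbE."""
        lost = set()
        for p in hundred_gig_ports:
            group_start = (p // GROUP_SIZE) * GROUP_SIZE
            members = range(group_start, group_start + GROUP_SIZE)
            lost.update(m for m in members if m != p)
        return lost

    print(disabled_ports({0, 5}))  # -> {1, 2, 3, 4}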
10
Port Numbering Conventions
set chassis fpc <slot> pic <pic-slot> {port | port-range} <0..71> channel-speed {10g | 40g}
et – 40GbE/100GbE
xe – 10GbE (channelized ports appear as xe-0/0/0:0 through xe-0/0/0:3)
ge – 1GbE
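For example, channelizing port 0 of a fixed QFX10002 to 4x10GbE with the syntax above (the slot and PIC values are illustrative):

    set chassis fpc 0 pic 0 port 0 channel-speed 10g

After the change, the four channels appear as xe-0/0/0:0 through xe-0/0/0:3.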
11
QFX10002 system architecture
QFX10002-72Q / QFX10002-36Q: a multi-chip system (port ASICs plus fabric chips) to scale I/O and support flexible applications
Each PFE provides 12x40GbE, 4x100GbE, or 48x10GbE interfaces
6 PFEs in the QFX10002-72Q, 3 PFEs in the QFX10002-36Q
2 fabric ASICs in the QFX10002-72Q, a single fabric ASIC in the QFX10002-36Q
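The fixed-platform densities follow directly from the per-PFE numbers on this slide; a quick arithmetic check (illustrative only):

    # Each PFE fronts 12x40GbE, 4x100GbE, or 48x10GbE; the two fixed
    # systems differ only in PFE count behind the fabric.
    PER_PFE = {"40GbE": 12, "100GbE": 4, "10GbE": 48}

    for model, pfes in [("QFX10002-72Q", 6), ("QFX10002-36Q", 3)]:
        print(model, {speed: n * pfes for speed, n in PER_PFE.items()})
    # QFX10002-72Q: 72x40GbE, 24x100GbE, 288x10GbE
    # QFX10002-36Q: 36x40GbE, 12x100GbE, 144x10GbE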
12
QFX10002 Packet Buffering
Each PFE chip in the QFX10000 has 4GB of external packet memory to accept packets as they ingress the system
Optimized for use in a variety of applications, including the data center edge
Differentiates the system from "switch-on-chip" designs, where packet memory is internal to the PFE
Provides good performance for applications without ECN (Explicit Congestion Notification) or DC-TCP
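Back-of-the-envelope arithmetic only (real behavior depends on the VOQ scheduler): how long 4GB can absorb a full-rate burst if all 12x40GbE ports of one PFE are congested at once:

    buffer_bytes = 4 * 2**30        # 4GB of external packet memory per PFE
    ingress_bps = 12 * 40 * 10**9   # 12x40GbE front panel
    print(round(buffer_bytes * 8 / ingress_bps * 1000, 1), "ms")  # ~71.6 ms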
13
QFX10002 Table Memory
Each PFE chip has dedicated memory for storing forwarding tables (256K IPv4 FIB, 256K IPv6 FIB, 2M host routes)
Allows collapsing of multiple layers of networking in data centers (DCI and DC core/spine)
Small on-chip memory per PFE for MAC, ACL, and MPLS tables
14
PFE ASIC
500G chip (500G in, 500G out)
4x100GbE of interface capacity when providing a mix of 10GbE, 40GbE, and 100GbE
5x100GbE of interface capacity when providing only 40GbE or 100GbE interfaces
15
Hybrid Memory Cube
High-bandwidth, energy-efficient, high-density memory; critical to building a high-I/O system
10.2x the bandwidth of a DDR3 module
8.5x the bandwidth of a DDR4 module
16
System Scale

FEATURE            QFX10002-36Q / QFX10002-72Q
Mac addresses      256K / 512K
L2 domains         16K
L3 VPN scale       4K
Filters            64K match conditions; 8K attachment points; policers & counters
MPLS               64K
FIB                256K IPv4, 256K IPv6
Host scale         2M
VxLAN              32K
17
Fabric ASIC
Crossbar fabric that provides PFE-to-PFE connectivity over CCLs (chip-to-chip links) using 25G SerDes
144 x 25G SerDes
Uses the 1XP protocol
Cell-based fabric (more details to follow)
18
PFE to FABRIC Connectivity
[Diagram: each PFE (port ASIC, fronting 12x40GbE, 4x100GbE, or 48x10GbE) connects to the fabric chips over 25Gbps links, 400Gbps in aggregate.]
19
Cell Based Forwarding
A packet arrives on an input port and is buffered
A page (8KB or 16KB) is sent across the fabric
On the fabric interface, the packet is split into N cells of variable length; cell sizes can vary between 96B, 112B, 128B, 144B, 160B, and 176B
Cells are forwarded on all available fabric links
Cells are received and reassembled into a packet
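A minimal sketch of the segmentation-and-spray flow above, assuming cells are cut greedily from the permitted sizes and sprayed round-robin; the fabric ASIC's actual chunking and load-balancing policy is not public:

    CELL_SIZES = [176, 160, 144, 128, 112, 96]  # permitted cell sizes, bytes

    def segment(page):
        """Cut a page into cells, largest permitted size first."""
        cells, offset = [], 0
        while offset < len(page):
            remaining = len(page) - offset
            size = next((s for s in CELL_SIZES if s <= remaining),
                        CELL_SIZES[-1])  # any short tail rides in a small cell
            cells.append(page[offset:offset + size])
            offset += size
        return cells

    def spray(cells, n_links):
        """Round-robin the cells across all available fabric links."""
        lanes = [[] for _ in range(n_links)]
        for i, cell in enumerate(cells):
            lanes[i % n_links].append(cell)
        return lanes

    lanes = spray(segment(bytes(8 * 1024)), 6)  # an 8KB page over 6 fabric links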
20
Virtual Output Queueing
Problem statement (egress buffering): [Diagram: ingress Port1 and Port2 both send across the fabric toward egress Port3.]
21
Virtual Output Queueing
Head-of-line blocking with egress buffering only: [Diagram: congestion at egress Port3 back-pressures the fabric, blocking ingress traffic destined to other ports.]
22
Virtual Output Queuing
Ingress VoQ buffering: [Diagram: the same scenario, but packets queue at the ingress side per egress port instead of in the fabric.]
23
Benefits of VoQ
Queueing for each egress port is maintained on the ingress PFE: one place to queue the traffic
Traffic is not sent over the fabric if the egress port is not ready to transmit it, resulting in very high system utilization
Prevents head-of-line blocking
24
VoQ
[Diagram: every PFE holds virtual output queues in its 4GB packet buffer and table memory, one set per egress port across the fabric.]
25
VoQ Architecture for 72X40GbE Ports
Each PFE has 8 queues x 72 ports in a 72x40GbE system: 576 virtual output queues per PFE
3,456 virtual output queues across a QFX10002-72Q system running in 72x40GbE mode
Per-port buffer: 57MB of the 4GB PFE buffer
26
VoQ Architecture for 72X40GbE Ports
[Diagram: virtual output queues in a QFX10002-72Q system with 72x40GbE ports.]
27
Virtual Output Queuing
A virtual output queue is a queue for an output port that is maintained by the input port
Each PFE maintains 8 virtual output queues for every port
VOQs can be of varying size, with "use-meters" to determine buffer allocation fairness
72x40GbE system: 3,456 virtual output queues
288x10GbE system: 13,824 virtual output queues
24x100GbE system: 1,152 virtual output queues
Mixed-mode system (56x40GbE + 4x100GbE): 2,880 virtual output queues
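All of the counts above come from the same formula: front-panel ports x 8 queues x 6 PFEs. A small sketch that reproduces them:

    QUEUES_PER_PORT = 8
    PFES = 6  # QFX10002-72Q

    def total_voqs(ports):
        """Every PFE keeps 8 VOQs for every front-panel port."""
        return ports * QUEUES_PER_PORT * PFES

    for label, ports in [("72x40GbE", 72), ("288x10GbE", 288),
                         ("24x100GbE", 24), ("56x40GbE + 4x100GbE", 60)]:
        print(label, total_voqs(ports))
    # -> 3456, 13824, 1152, 2880, matching the slide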
28
Packet Forwarding Pipeline
500Mpps of packet processing/lookup capacity, to accommodate multiple lookups per packet within the pipeline
333Mpps of packet throughput capacity (i.e., headroom for roughly 1.5 lookups per packet at full rate)
29
QFX10002-72Q and QFX10002-36Q Control Plane
4-core Intel Gladden (Ivy Bridge) CPU
16GB of 1333MHz unbuffered ECC DDR3 SDRAM
Field-upgradable BIOS
10G connection between the CPU and each PFE
Two slim SATA Gen 2 SSD slots on board with a 32GB NAND flash module
Reset push-button
30
Modular QFX10000 series switches
31
Modular QFX10000 Series Switches
QFX10008: 8 I/O slots, 13 RU, width 17.4", depth 32", 6Tbps/slot, up to 240x100GbE or 288x40GbE
QFX10016: 16 I/O slots, 21 RU, width 17.4", depth 35", 6Tbps/slot, up to 480x100GbE or 576x40GbE
32
QFX10008
Dimensions: height 22.55" (13 RU), width 17.4", depth 32"
6Tbps/slot; up to 288x40GbE or 240x100GbE
Orthogonal direct connectors: no midplane between line cards and fabric cards
8 line cards / I/O slots
2 control boards / routing engines
6 SIBs / fabric cards
2 fan trays, 2 fan controller boards
6 power supplies
Airflow: front-to-back
33
QFX10016
Dimensions: height 36.65" (21 RU), width 17.4", depth 35"
6Tbps/slot; up to 576x40GbE or 480x100GbE
Orthogonal direct connectors: no midplane between line cards and fabric cards
16 line cards / I/O slots
2 control boards / routing engines
6 SIBs / fabric cards
2 fan trays, 2 fan controller boards
10 power supplies: 2850W AC or DC, 220V input, C19 cable type, N+1 feed redundancy
Airflow: front-to-back
34
Detail Overview – Rear View
[Diagram: rear views of the QFX10008 and QFX10016.]
35
Midplane-less Design
Power and cooling efficiency
Reliability
Future scale
36
Modular Line Card Options
36x40GbE with QSFP+
12x100GbE with QSFP28
144x10GbE with 40GbE-to-4x10GbE breakout
30x100GbE with QSFP28
24x40GbE + 6x100GbE
37
144X10GbE with 40 to 4X10GbE Breakout
100GbE Ports on the 36-Port Card
[Diagram: faceplate port map of the 36-port card highlighting the 100GbE-capable ports. The same card runs as 36x40GbE with QSFP+, 12x100GbE with QSFP28, or 144x10GbE with 40GbE-to-4x10GbE breakout.]
38
84X10GbE with breakout cables
10GbE card: 60x10GbE + 6 QSFP+ + 2x100GbE QSFP28, giving 84x10GbE with breakout cables. The same 100GbE port behavior applies to the 10GbE line card.
40
36x40GbE Card
[Diagram: three PFEs, each with a 4GB packet buffer and table memory, each fronting 12x40GbE (or 4x100GbE, or 48x10GbE); each PFE connects to the six fabric chips over 6x25G links.]
41
30x100GbE Card
[Diagram: six PFEs, each with a 2GB packet buffer and table memory, each fronting 5x100GbE or 5x40GbE; each PFE connects to the six fabric chips over 6x25G links.]
42
60x10GbE Line Card
[Diagram: two PFEs, each with a 4GB packet buffer and table memory; one fronts 48x10GbE (or 48x1GbE), the other 12x10GbE plus 6x40GbE or 2x100GbE; each PFE connects to the six fabric chips over 6x25G links.]
43
System Scale: QFX10000 (8-slot & 16-slot)

FEATURE            VALUE
Mac addresses      1M
L2 domains         16K
L3 VPN scale       4K
Filters            64K match conditions; 8K attachment points; policers & counters
MPLS               64K
FIB                256K IPv4, 256K IPv6
Host scale         2M
VxLAN              32K
44
Control Board for Modular QFX10000 switches
CPU: Intel Ivy Bridge (Gladden), 4-core, 2.4GHz
Memory: 8GB onboard NAND flash, 16GB SDRAM
2 removable SATA SSD slots, 1 USB 2.0 port
4 SFP+ ports
PTP: RJ45 for grandmaster clock
1 SFP or RJ45 management port, 1 console port
Same RE/CB in the 8- and 16-I/O-slot systems
45
Software Architecture
[Diagram: software stack, bottom to top: flash memory (U-Boot + ONIE); carrier-grade Yocto Linux on an x86 control plane, with a hardware abstraction layer to the PFE data plane; KVM virtualization; carrier-class networking (Junos) and platform daemons; extensible user space for 3rd-party applications; open APIs with CLI, XML, NETCONF, UNIX, and API access.]
46
Software Architecture
SOFTWARE ARCHITECTURE HIGHLIGHTS
Decouple platform & PFE daemons from Junos
Accelerate time-to-market for product variants (use Linux drivers & abstractions for new platforms)
Improve performance on the multicore CPU: better BFD performance, MAC learning, and convergence (by taking two big processes out of Junos)
Direct access to the PFE via APIs (future)
Faster pace of innovation
47
Optics Options for Data Center
10GbE: DAC (1-5m); 4x10GbE LR (10km); 4x10GbE IR (2km); 4x10GbE SR (300m)
40GbE: LX4 (150m); SR4 (300m); ESR4 (400m); LR4 (10km); IR4 (2km); DAC (1m-7m)
100GbE: SR4 (100m); LR4 (10km); CWDM4 (2km)
48
What is Junos Fusion?
Data center networking with simplified management at scale
Open standards & programmability: IEEE 802.1BR and JSON-RPC APIs
Resilient
Plug-and-play provisioning, 1GbE-100GbE
49
Thank You