1
ACI Overview
Temi Ajasa – Systems Engineer
Allen McClure – Systems Engineer
2
AGENDA
- Application Centric Infrastructure Overview
- Application Centric Infrastructure Policy Model
- Nexus 9000 Hardware
- Q and A
3
APPLICATION-CENTRIC Infrastructure
Nexus series switches; Application Policy Infrastructure Controller (APIC); industry-leading ecosystem; open standards; open source
4
Market Trends Require Open APIs and an Open Source Approach
APPLICATIONS (PHYSICAL + VIRTUAL): 60–80% of workloads virtualized; Hadoop, big data, and analytics; ~21% of physical servers virtualized by 2016
HYPERVISOR FRAGMENTATION: 42% of businesses use multiple hypervisors
PRIVATE/PUBLIC CLOUD: private cloud (enterprise IT organizations), public cloud (service provider cloud); 2 out of 3 US-based midsize firms will use cloud services
INTEGRATED DEVELOPMENT AND OPERATIONS: open RESTful APIs, open source
5
A New OPEN Operating Model is Required
TRADITIONAL NETWORK MODEL — a network of boxes: needs agility and faster time to applications; lacks scale, visibility, and security
TODAY'S SDN DATACENTER MODEL — software-based network virtualization: more complexity decreases reliability; disjointed overlay and underlay
FUTURE OPEN MODEL — Application Centric Infrastructure: open source and open APIs; physical and virtual; radical simplification; policy and automation; scale and security; visibility and troubleshooting
6
Modern Data Center Network Properties
Removing Overloaded Semantics
The classical approach to connectivity requires manually mapping the various connectivity service layers: application requirements → IP addressing → control and audit of connectivity (security: firewall, ACL, …) → IP address, VLAN, VRF → enabled connectivity (the network).
ACI instead maps the application connectivity requirements directly onto the fabric: security is always enabled, the fabric is application aware, services are inserted dynamically (redirect and load balance), and connectivity explicitly defined for the application is provisioned dynamically.
7
ACI Building Blocks: Next Generation Nexus vs. Traditional Networks
Nexus 9500 and 9300 — simple and secure; future proof, software upgradable to ACI; optimized NX-OS with a >50% simpler code base; open RESTful APIs; open source; 40G non-blocking fabric; integrated overlay; common building blocks for access and core; scale out without compromise; programmability and automation; network virtualization support; resiliency (in-service patching, upgrade, fast restart); leading price, power efficiency, programmability, port density, and performance
APIC — centralized policy model; built-in line-rate endpoint directory; innovations in software, hardware, and system design
8
Enabled by physical and virtual integration
ACI: rapid deployment of applications onto networks with scale, security, and full visibility — enabled by physical and virtual integration
[Dashboard view: tenant and application health scores; latency in microseconds; systems telemetry including packet drops; visibility across physical networking, L4–L7 services (application delivery controllers, physical firewalls), multi-DC WAN and cloud, compute, storage, hypervisors, and virtual networking]
9
Application centric infrastructure INVESTMENT PROTECTION
APIC with Nexus 9300 and 9500; Nexus 2K and Nexus 7K; integrated WAN edge; hypervisors and virtual networking; physical networking; L4–L7 services; multi-DC WAN and cloud; compute; storage
10
AGILITY: Any Application, Anywhere — Physical and Virtual
A common application network profile (WEB → APP → DB, with firewall and ADC) captures the SLA, QoS, security, and load-balancing requirements. The APIC applies the profile through an extensible scripting model: connectivity policy, security policies, QoS, bandwidth reservation, availability, and L4–L7 application services, across storage, compute, and the hypervisor.
11
Open Source ACI Policies
APPLICATION-CENTRIC POLICY DEFINITION
Cisco Open Source Solution — application policy model; open source technology; community driven; standalone Nexus 9000 switch and traditional networks; ACI extensions for orchestration and automation, network controllers, hypervisors, and the physical network
Cisco ACI Solution (Nexus 9000 ACI mode, APIC, ACI fabric) — application policy model with hardware acceleration; any application, anywhere; best-in-class scale and performance; real-time network telemetry
12
Decouple Application & policy from IP infrastructure
SIMPLIFICATION
Traditional IP network: 10,000s of ACLs, complex QoS, multiple management points, excessive protocols, flooding
ACI: centralized security and QoS policy, no flooding, routed network, full host mobility, common policy — decoupling the application and its policy from the IP infrastructure
13
Elasticity at Scale / Pay as You Grow
Built for the growing commercial enterprise up to the largest service providers: 64,000 tenants; 1 million IPv4/IPv6 endpoints; 576 wire-rate 40G ports per spine; 60 Tbps capacity per spine; 8K multicast groups per leaf
14
Application Centric Infrastructure: Security with ACI
APIC policy engine with open APIs and application network profiles: centralized compliance and auditing; services chaining; automated import/export of policy via API (with support for external policy engines); policy separated from network forwarding; complete isolation with full scalability and security across groups (Engineering, Sales, HR, Finance, Legal, Marketing) — enabling a dynamic enterprise without compromise
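The policy import/export API mentioned above can be exercised programmatically. As an illustrative sketch: the managed-object class names (fvTenant, fvAp, fvAEPg) follow published APIC REST conventions, but the tenant, profile, and endpoint-group names here are hypothetical examples, not from this deck.

```python
# Sketch of building an APIC-style policy payload (a JSON managed-object tree).
# Class names follow APIC REST conventions; all concrete names are examples.
import json

def build_app_profile(tenant: str, app: str, epgs: list) -> dict:
    """Build a nested managed-object payload for one tenant and one
    application network profile containing the given endpoint groups."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [{
                "fvAp": {
                    "attributes": {"name": app},
                    "children": [
                        {"fvAEPg": {"attributes": {"name": epg}}}
                        for epg in epgs
                    ],
                }
            }],
        }
    }

if __name__ == "__main__":
    # Three-tier profile like the WEB/APP/DB example on the agility slide.
    payload = build_app_profile("Engineering", "three-tier", ["web", "app", "db"])
    print(json.dumps(payload, indent=2))
```

In a live deployment this payload would be POSTed to the controller's REST endpoint after authenticating; that network step is omitted here.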
15
Common Hardware Platform: Two Operational Models
Traditional networks with optimized NX-OS (Q4 2013), software upgradable to Application Centric Infrastructure with APIC (Q2 2014). Programmability; 1/10/40 GE, 100 GE ready; price/performance.
16
Nexus 9000 Series Switches Family of fixed and modular switches
Foundation for Application Centric Infrastructure (ACI)
Runs in two operating modes: Cisco NX-OS and ACI
Delivers an industry-leading 10/40 Gb platform for price/performance, power, and programmability
Establishes Cisco leadership in 40 Gb density and performance
Designed for future upgrade to 100 Gb
17
Build a Better Switch Merchant+ Foundation
State-of-the-art mechanical design; object-oriented, programmable operating system; next-generation development and verification methodology; two modes of operation: NX-OS fabric mode, and ACI OS + APIC
18
Overview: High Port Density, Line-Rate Performance on All Ports
Low latency; VXLAN bridging/gateway/routing
Highly integrated switch and buffer functionality: only 2–4 ASICs per line card, no buffer bloat; a mix of 28nm Cisco and 40nm Broadcom ASICs
Power efficiency: Platinum-rated power supplies with 90–94% efficiency across all workloads; 3.5W per 10 Gbps port; 14W per 40 Gbps port
First modular chassis without a mid-plane; unobstructed front-to-back airflow

Port density         Nexus 9508   Nexus 9516
10 Gbps ports        1152         2304
40 Gbps ports        288          576
19
Nexus 9500 – Chassis and Line Card Options
Nexus 9508 chassis: 13 RU high; 30 Tbps fabric today; up to 288 x 40G and 1,152 x 10G ports; headroom for 100G densities (connectors, power); supervisors with quad-core CPU and a default 64GB SSD
Line card options:
- 36-port 40G QSFP+ (non-blocking) — 40G aggregation, non-ACI
- 48-port 10G SFP+ plus 4-port 40G QSFP+ — 1/10G access and 10/40G aggregation, ACI access ready
- 48-port 1/10G-T plus 4-port 40G QSFP+ (non-blocking) — 1/10G access and 10/40G aggregation, ACI access ready
- 36-port 40G QSFP+ (non-blocking) — 40G fabric spine, ACI spine
20
Chassis Architecture – Density
Chassis dimensions: 17.5 in wide, 30 in deep, 13 RU (front view). Maximum three chassis per rack (assuming 18KW per rack): up to 3,456 10G line-rate ports per rack, and up to 864 40G line-rate ports per rack. Designed for at least a 2.5x speed increase in next-gen ASICs.
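The per-rack density figures above are straight multiples of the per-chassis numbers. A quick check, assuming (as the slide states) three Nexus 9508 chassis per rack:

```python
# Per-rack port density for the Nexus 9508; per-chassis counts come from
# the chassis overview slide (1152 x 10G, 288 x 40G).
CHASSIS_PER_RACK = 3
PORTS_10G_PER_CHASSIS = 1152
PORTS_40G_PER_CHASSIS = 288

ports_10g_per_rack = CHASSIS_PER_RACK * PORTS_10G_PER_CHASSIS  # 3456
ports_40g_per_rack = CHASSIS_PER_RACK * PORTS_40G_PER_CHASSIS  # 864

print(ports_10g_per_rack, ports_40g_per_rack)
```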
21
Chassis Design – Components
8 line card slots, max 3.84 Tbps per slot (duplex); redundant supervisor engines; 3 or 6 fabric modules (behind the fan trays); 3 fan trays; redundant system controller cards; no mid-plane for line-card-to-fabric-module connectivity; 3000W AC power supplies with 2+0, 2+1, or 2+2 redundancy and support for up to 8 power supplies
Nexus 9508 front and rear views
Designed for power efficiency, cooling efficiency, reliability, and future scale
22
Chassis – Power Supplies Units (PSU)
3000W AC PSU: single 20A input at 220V; support for a range of international cabling options; 92%+ efficiency (80 Plus Platinum, equivalent to a Climate Savers / Green Grid Platinum rating)
Range of PSU configurations: minimum 1 PSU, maximum 8; (2) PSUs power a fully loaded chassis; N+1 redundancy or N+N grid redundancy; 2x headroom for future port densities, bandwidth, and optics
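The redundancy options translate into simple PSU counts. A minimal sketch, assuming (per the slide) that two 3000W supplies power a fully loaded chassis:

```python
# PSU count required under each redundancy scheme, given that a fully
# loaded chassis needs `base` supplies (2, per the slide).
def psus_required(base: int, scheme: str) -> int:
    if scheme == "none":   # 2+0: no redundancy
        return base
    if scheme == "n+1":    # 2+1: one spare supply
        return base + 1
    if scheme == "n+n":    # 2+2: full grid redundancy (dual power feeds)
        return 2 * base
    raise ValueError("unknown scheme: " + scheme)

for scheme in ("none", "n+1", "n+n"):
    print(scheme, psus_required(2, scheme))
```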
23
Chassis – Supervisor Modules
Redundant half-width supervisor engine: Sandy Bridge quad-core CPU at 1.8GHz; 16GB RAM, upgradable to 64GB; 64GB SSD (default); common to the 4-, 8-, and 16-slot chassis; performance/scale focused
Range of management interfaces: external clock input (Precision Time Protocol), management port, (2) USB ports, console port
24
Chassis – System Controllers
Redundant half-width system controller: offloads control-plane tasks from the supervisor for increased system resiliency and scale; common to the 4-, 8-, and 16-slot chassis; performance/scale focused; dual-core ARM processor at 1.3GHz
Central point of chassis control: Ethernet Out-of-Band Channel (EOBC) switch between the supervisors and line cards; Ethernet Protocol Channel (EPC) switch, a 1Gbps switch for intra-node data-plane communication (protocol packets); manages and monitors the power supplies and fan trays via the SMB (System Management Bus)
25
Nexus 9500 – Control Plane – Communications
The Nexus 9500 chassis has two communication channels connected through the SGMII (1Gbps) switches on the System Controller modules: the EOBC (Ethernet Out-of-Band Channel) and the EPC (Ethernet Protocol Channel). There is no dedicated direct path between the I/O modules and the Supervisor module.
[Diagram: supervisors, fabric cards (NFE), and I/O modules (NFE + ALE) all connect over 1G links to the EOBC and EPC switches on the System Controller]
26
Hardware Nexus 9500 – Control Plane – EOBC
The Ethernet Out-of-Band Channel (EOBC) interconnects all modules through an SGMII (1Gbps) switch that resides on the System Controller (SC). The EOBC serves as the normal control path; it also replaces the traditional System Management Bus (SMB), simplifying the system design.
[Diagram: line cards (ALE + NFE), fabric cards, and both supervisors all connect to the EOBC switch on the SC]
27
Nexus 9500 – Control Plane – EPC
The Ethernet Protocol Channel (EPC) handles protocol packets between the Supervisor and the line cards. Unlike the EOBC, the EPC connects only the supervisors and the fabric modules through an SGMII (1Gbps) switch; there is no dedicated direct path between the line cards and the supervisor modules. To send protocol packets (such as BPDUs) to the supervisors, a line card first transfers them over HiGig2 links to a fabric module; the fabric module terminates those packets and redirects them via the EPC to the supervisor. This is another function of the System Controller.
A note on the ASICs: the NFEs are the merchant (Broadcom) ASICs, while the ALEs (Application Leaf Engines) are the Cisco ASICs. In this design a packet is first processed by the line card's NFE, then sent to the ALE, then to the fabric card, which also uses NFEs to switch and route.
28
Hardware 8-Slot Modular Chassis Air Flow
The chassis has complete front-to-back airflow; the airflow direction is NOT reversible. Fan trays are fully redundant, but must be removed in order to service the fabric modules. Designed for speed increases across multiple next-gen ASICs.
[Views: front (intake), rear (exhaust), and rear with a fan tray removed exposing the fabric modules]
29
Fabric Modules and Fan Trays
Fabric modules: up to 6 per chassis; different cost points for 1/10G access and 40G aggregation; flexibility for future generations of fabric modules; quad-core ARM CPU at 1.3GHz for supervisor offload; hot swappable; all modules forward traffic, with smooth degradation during replacement
Fan trays: 3 per chassis, (3) dual fans per tray; dynamic speed control driven by temperature sensors; straight airflow across line cards and fabric modules; N+1 redundancy per tray
30
Fabric Module – Data Plane Scaling for 8-Slot Chassis
A fabric module in an 8-slot chassis provides up to 320 Gbps (8 x 40 Gbps links) to each line card slot. With 6 fabric modules installed, each line card slot can have up to 1.92 Tbps of forwarding bandwidth in each direction (320 Gbps, 640 Gbps, 960 Gbps, 1.28 Tbps, 1.60 Tbps, 1.92 Tbps with 1–6 modules).
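The slot bandwidth figures follow directly from the link fan-out: each fabric module provides 8 x 40 Gbps of links to every line card slot, and modules scale linearly. A quick check:

```python
# Forwarding bandwidth per line-card slot in the 8-slot chassis,
# as a function of installed fabric modules (numbers from the slide).
LINKS_PER_MODULE_PER_SLOT = 8
LINK_SPEED_GBPS = 40

def slot_bandwidth_gbps(fabric_modules: int) -> int:
    return fabric_modules * LINKS_PER_MODULE_PER_SLOT * LINK_SPEED_GBPS

for n in range(1, 7):
    print(n, slot_bandwidth_gbps(n))  # 320 Gbps up to 1920 Gbps (1.92 Tbps)
```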
31
Line Cards – Overview 40G Aggregation
- 36-port 40G QSFP+ (non-blocking) — 40G aggregation, NX-OS only
- 48-port 10G SFP+ plus 4-port 40G QSFP+ — 1/10G access and 10/40G aggregation, ACI access ready
- 48-port 1/10G-T plus 4-port 40G QSFP+ (non-blocking) — 1/10G access and 10/40G aggregation, ACI access ready
- 36-port 40G QSFP+ (1.5:1 oversubscribed) — ACI ready
- 36-port 40G QSFP+ (non-blocking) — 40G fabric spine, ACI only
32
Fixed Switch Platform – Nexus 9300
Nexus 9396PQ: 48-port 10G SFP+ and 12-port 40G QSFP+; 2 RU; 650W AC power supply; uplink module with 12-port 40G QSFP+, an additional 40MB buffer, and full VXLAN bridging and routing capability
Nexus 93128TX: 96-port 1/10G-T and 8-port 40G QSFP+; 3 RU; 800W or 1200W AC power supplies
Common across the Nexus 9300 family: redundant fans (3) and power supplies (2); front-to-back and back-to-front airflow options; dual- or quad-core CPU with a default 64GB SSD
33
Optical Innovation – Removing 40G Barriers
Challenge: 40G optics are a significant portion of CAPEX, and standard 40G optics require new cabling.
Solution: the Cisco 40G SR-BiDi QSFP re-uses the existing 10G MMF cabling infrastructure and patch cables (same LC connector) at a price comparable to 10G optics. QSFP pluggable and MSA compliant; dual LC connector; support for 100m on OM3 and 125m+ on OM4; transmits and receives on 2 wavelengths at 20G each. Available end of CY13 and supported across all Cisco QSFP ports.
34
Optics Support on the Nexus 9000 series
All optical interfaces are pluggable (MPO)
- 10G SFP+ transceivers: SR, LR
- 10G cables: passive copper, active optical
- 10G Fabric Extender Transceiver (FET)
- 40G QSFP transceivers: SR4, CSR4, BiDi, LR4
- 40G cables: passive copper, active optical
- 1G transceivers: SM, MM, GLC-T
35
Thank you