Resiliency
Joint Techs Workshop, July 19, 2005 - Vancouver, BC
Debbie Montano, Dir. of Research & Education Alliances
dmontano@force10networks.com
Copyright 2005, Force10 Networks, Inc
Agenda
– Who is Force10?
– Resiliency: Reliability, Stability, Security, Fault Tolerance, High Availability
What is Force10 about?
– Innovation: ASICs, backplane, 3-CPU architecture, hot-lock ACLs, ...
– Simplicity: easier network designs, predictable performance, hot-swap of components, DoS resilience, hitless failover, one software train, ...
– Reliability: distributed forwarding, fault isolation, ECC-protected memory, modular software design, separation of control and data planes, automated testing, ...
– Leadership, lowering TCO, peace of mind
Supporting the Community
– Internet2 partner: I2 HOPI project
– Supporting SC|05: SCinet and the Bandwidth Challenge; supported SCxy for many years
– Supporting iGrid and other events
– Engaging with the Quilt (more soon)
– Many R&E customers around the globe: universities, energy sciences labs, supercomputing centers, research networks, exchanges, regional optical networks, gigaPOPs, etc.
Force10 Networks, Inc: Leaders in 10 GbE Switching & Routing
– Founded in 1999, privately held
– First to ship line-rate 10 GbE switching & routing
– Pioneered a new switch/router architecture providing best-in-class resiliency and density, simplifying network topologies
– Customer base spans academic/research, data center, enterprise and service provider
– Fastest-growing 10 GbE vendor
– April 2005: TeraScale E300 switch/router named winner of the Networking Infrastructure category in eWEEK's Fifth Annual Excellence Awards program
Force10 Participation: Internet2 HOPI Project
HOPI - Hybrid Optical Packet Infrastructure
– Fundamental questions: How will the core Internet architecture evolve? What should the next-generation Internet2 network infrastructure be?
– Examining a hybrid of shared IP packet switching and dynamically provisioned optical lambdas
– Modeling scalable next-generation networks
– Force10 is an Internet2 Corporate Partner and HOPI project partner, providing five E600 switch/routers being deployed in Los Angeles, DC, Chicago, Seattle & New York
Internet2 HOPI Project
Hybrid Optical Packet Infrastructure (HOPI) Node
[Diagram: a HOPI node joins the Abilene network (via the Abilene core router) and an NLR 10 GigE lambda (via NLR optical terminals and an optical cross-connect) through a Force10 E600 switch/router, with a 10 GigE backbone, out-of-band control/measurement/support links, and connections to a Regional Optical Network (RON) and GigaPOPs.]
Force10 Firsts...
– Jan 2002: First line-rate 10 GbE system shipped (E1200)
– Apr 2002: First line-rate 336 GbE ports demo
– Oct 2002: First line-rate 10 GbE mid-size system shipped (E600)
– Nov 2003: First line-rate 10 GbE compact-size system shipped (E300); first public zero-packet-loss hitless failover demo
– Sept 2004: First line-rate 672 GbE / 56 x 10 GbE ports
– March 2005: First 48 GbE x 10 GbE purpose-built data center switch
– April 2005: First >1200 GbE ports per chassis
TeraScale E-Series: Chassis-based 10 GbE Switch/Router Family

            E1200               E600                 E300
Capacity    1.68 Tbps, 1 Bpps   900 Gbps, 500 Mpps   400 Gbps, 196 Mpps
Size        21 rack units       16 rack units        8 rack units
            (2 per rack)        (3 per rack)         (6 per rack)
Slots       14 line cards       7 line cards         6 line cards

Line cards (all models): 10GE; 1 GbE SFP; 1 GbE 10/100/1000
TeraScale E-Series: Highest Density GigE and 10 GigE

                          E1200   E600   E300
High-density GigE ports   1260    630    288
Line-rate GigE ports      672     336    132
Line-rate 10 GigE ports   56      28     12
Force10 S50 Switch: Designed for High-Performance Data Centers
– Performance & capacity to scale: switching capacity of 192 Gbps, 20% more than competitive switches; stack up to eight S50s in a virtual switch to simplify management
– Core-like resiliency: a resiliency feature protects against stack breaks; advanced link-aggregation features
– Hardware (front/rear views): 2 x 10 GbE XFP module, 2 stacking ports, redundant power supply connector slot, AC power supply inlet
Top 500: Force10 List, June 2005
Force10 has 23 systems in the Top 500 list - 5 more than last year.

Rank  Customer
5     Barcelona Supercomputing Center (BSC)
20    NCSA - Tungsten
24    European Centre for Medium-Range Weather Forecasts
25    European Centre for Medium-Range Weather Forecasts
38    NCSA TeraGrid
46    Grid Technology Research Center, AIST
47    NCSA - Tungsten2
53    Brigham Young University - Marylou4
54    University of Oklahoma - Topdawg
58    Argonne National Labs
63    San Diego Supercomputing Center
74    TACC / Texas Advanced Computing Center
94    Petroleum Geo Services (PGS)
108   SDSC TeraGrid
129   UT SimCenter at Chattanooga
135   Petroleum Geo Services (PGS)
168   Grid Technology Research Center, AIST
200   SUNY at Buffalo
203   Grid Technology Research Center, AIST
300   Veritas DGC
326   Cornell Theory Center
449   University of Liverpool
499   Doshisha University
Top 500: Interconnect of Choice

Type         2004    2005
Ethernet     35.2%   42.4%
Myrinet      38.6%   28.2%
SP Switch    9.2%    9.0%
NUMAlink     3.4%    4.2%
Crossbar     4.6%    4.2%
Proprietary  -       3.4%
Infiniband   2.2%    3.2%
Quadrics     4.0%    2.6%
Other        2.8%    -

– Ethernet is the only interconnect technology that has made substantial gains
– Myrinet is down by over 10 percentage points
– Infiniband's gains are negligible
Resiliency
What is it?
– The ability to recover readily, to bounce back
– Fault tolerant, self-healing
Why should you care?
– Many things attack and stress your switches/routers - some malicious, some not, many outside your control
– You need your network to continue running smoothly: reliability & security
How does one achieve resiliency?
– Stay tuned...
Route Processor Module – 3 CPUs
RPM: 3 CPUs - Resiliency
Route Processor Module (RPM)
– Handles all route & control processing
– Optional redundant RPM
3 independent CPUs per RPM:
– 1 for switching (Layer 2) processes
– 1 for routing (Layer 3) processes
– 1 for local control & management
– Process isolation with memory protection
You won't have to reboot for:
– Spanning tree loops creating Layer 2 MAC address floods
– Route flaps
– Distributed Denial of Service (DDoS) attacks
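The idea that a fault in one plane's processes need not take down the others can be sketched as a toy supervisor in Python. This is an illustrative single-process simulation, not Force10's FTOS design; the `Plane`/`Supervisor` classes and the event strings are invented for the example:

```python
class Plane:
    """One isolated 'CPU' in the simulation: its own state, its own failures."""
    def __init__(self, name):
        self.name = name
        self.handled = 0   # events processed successfully
        self.restarts = 0  # times this plane was restarted after a crash

    def handle(self, event):
        if event == "crash":  # stand-in for e.g. a route-flap storm
            raise RuntimeError(f"{self.name} crashed")
        self.handled += 1

class Supervisor:
    """Restart a crashed plane without touching the other two (no full reboot)."""
    def __init__(self):
        self.planes = {n: Plane(n) for n in ("switching", "routing", "management")}

    def dispatch(self, plane_name, event):
        plane = self.planes[plane_name]
        try:
            plane.handle(event)
        except RuntimeError:
            # Fault isolation: only the failing plane is replaced; the
            # switching and management planes never notice.
            fresh = Plane(plane_name)
            fresh.restarts = plane.restarts + 1
            self.planes[plane_name] = fresh
```

A route-flap storm that crashes the routing plane leaves the Layer 2 and management planes untouched, which is the point of giving each plane its own CPU and memory protection.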
Control Packet Rate Limiting
Denial of Service (DoS) attacks:
– Malicious attacks designed to bring the network to its knees
– Flood the system with useless traffic crafted to look like control plane packets
– Target the control plane CPU - enough traffic can overwhelm any CPU
– The problem is worse with 10 GigE links: more traffic!
Force10 defense:
– Rate limit traffic to the control plane CPUs
– Queue & prioritize control plane messages
– Throttle the control plane when CPU utilization exceeds 85%
– With Access Control Lists (ACLs), rate limit only specific traffic types, e.g. ICMP
– Ensure critical control messages get through
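Control-plane policing of this kind is usually a token-bucket rate limiter per traffic class. The sketch below shows the mechanism; the protocol names and token budgets are invented for illustration, and a real switch does this in hardware, not Python:

```python
import time

class TokenBucket:
    """Refills at `rate` tokens/sec up to `capacity`; each packet costs one token."""
    def __init__(self, rate, capacity, now=None):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill for the elapsed interval, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

def make_policy(now=None):
    """Hypothetical per-protocol policy: routing protocols get generous
    budgets, ICMP (a favourite DoS vehicle) gets a tight one."""
    return {"ospf": TokenBucket(1000, 100, now),
            "bgp":  TokenBucket(1000, 100, now),
            "icmp": TokenBucket(10, 5, now)}

def admit(policy, packet_type, now=None):
    """Drop anything without a policy entry; rate-limit the rest."""
    bucket = policy.get(packet_type)
    return bucket is not None and bucket.allow(now)
```

Because each class has its own bucket, an ICMP flood exhausts only the ICMP budget while BGP and OSPF keepalives continue to reach the CPU.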
ACLs Applied to Control Packets
Access Control Lists (ACLs):
– Extensive ACLs can be applied to incoming control packets
– ACLs run at line rate and add no additional latency, which helps reduce overall route table convergence time
– Fine-tune packet classification & control mechanisms
Scalable Security: ACL Security Filters Per Chassis

Size      Product                        Filters     Source
1 rack    CRS-1 core router              -           -
1/2 rack  Force10 E1200 switch/router    1+ million  Tolly verified
-         Extreme BD 10K                 128k        Extreme web site
-         Juniper T640 core router       -           -
1/3 rack  Force10 E600 switch/router     500+k       Based on Tolly
-         Foundry MG8 / 40G              40k         Foundry web site
-         Cisco Catalyst Sup720 / 6509   288k        Cisco web site
1/6 rack  Force10 E300 switch/router     240+k       Based on Tolly
Hot-Lock(TM) ACL Technology
Access Control Lists (ACLs) must be updated frequently:
– For comprehensive security
– To counter newly discovered or pending attacks
If an ACL update opens the gates, intruders with sophisticated port-scanning technology can enter your network while the security hole is open. Millions of packets could pass unchecked into your network.
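The "millions of packets" figure is easy to sanity-check: at 10 GbE line rate, minimum-size Ethernet frames (64 bytes plus 20 bytes of preamble and inter-frame gap, i.e. 672 bits on the wire) arrive at roughly 14.88 Mpps. The 100 ms window below is a hypothetical update gap, chosen only to show the scale:

```python
LINE_RATE_BPS = 10e9           # 10 GbE line rate
FRAME_BITS = (64 + 20) * 8     # min frame + preamble/IFG = 672 bits on the wire
PPS = LINE_RATE_BPS / FRAME_BITS  # ~14.88 million packets per second

def unchecked_packets(gap_seconds):
    """Packets that could slip past while no ACL is bound (illustrative)."""
    return int(PPS * gap_seconds)

print(unchecked_packets(0.1))  # ~1.49 million frames in a 100 ms window
```

Even a tenth of a second with the ACL unbound admits well over a million minimum-size frames, so the size of the update window matters far more than how often you update.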
2-Step ACL Update
Competing vendors use a 2-step ACL update procedure:
– The old ACL is removed from the interface before the new one is installed, creating a security hole during the update
– The higher the interface speed, the greater the risk
1-Step ACL Update
Force10 uses a 1-step ACL update:
– Hot-Lock avoids removing the ACL from the interface prior to modifying it
– No security hole during ACL updates
– No disruption of traffic during ACL updates
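The difference between the two procedures can be sketched in a few lines of Python. This is a toy model of the idea, not the actual TeraScale hardware update path; the `Interface` class and the predicate-list ACL representation are invented for illustration:

```python
class Interface:
    """Toy interface whose ACL table is consulted for every packet."""

    def __init__(self, acl):
        self._acl = acl  # list of predicate rules; None = nothing bound

    def permitted(self, packet):
        acl = self._acl  # one read: we see the old table or the new, never neither
        if acl is None:
            return True  # unfiltered! this is the 2-step update's security hole
        return any(rule(packet) for rule in acl)

    def two_step_update(self, new_acl):
        self._acl = None     # step 1: unbind the old ACL (the hole opens here)
        self._acl = new_acl  # step 2: bind the new one

    def hot_lock_update(self, new_acl):
        self._acl = new_acl  # single swap: at no moment is nothing bound
```

In the 1-step version the new table is built off to the side and swapped in with one atomic rebind, so every packet is checked against either the old rules or the new ones; the 2-step version has an instant where no rules apply at all.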
Tolly Tested: Hitless Route Processor Module (RPM) Failover
– Failover from the working RPM to the redundant RPM
– E1200 with 56 x 10 GigE ports in a snake configuration
– Throughput tests at various frame sizes (64, 1518 & 9252 bytes)
– Issued “redundant force-fail RPM” 1 minute into each test
– Result: line-rate throughput with zero frame loss at every frame size
Hitless Technology Claims
– Competitor A: claims hitless Layer 2 and Layer 3 failover, but has done no public demo with line-rate traffic and zero packet loss
– Competitor B: claims hitless Layer 2 and Layer 3 failover, but has no switch fabric redundancy (switch fabric and management module are combined on one card) and cannot claim zero packet loss for line-rate, all-ports failover
– Competitor C: claims hitless Layer 2 and Layer 3 failover, but reboots line cards during management card failover and has no switch fabric redundancy
– Force10: hitless Layer 2 and Layer 3 failover; ZERO-packet-loss hitless failover of the Route Processor Module demonstrated at a public show (SC2003) with Layer 2 and Layer 3 (BGP, OSPF) traffic
Tolly Tested: Hitless Switch Fabric Module (SFM) Failover
– Supports 100% of line-rate, zero-loss throughput across 56 10-Gigabit Ethernet ports during a Switch Fabric Module failover, while passing over 1 Terabit per second of traffic
– Recovers from link outages in less than 2 milliseconds with a single Layer 2 flow, and in less than 1 millisecond with 16 million Layer 3 flows - both well below the failover times usually reserved for SONET/SDH links
– Maintains all BGP, OSPF and Telnet sessions even when hammered by a multiheaded Denial of Service attack
– Relies on QoS facilities to ensure voice, video and data traffic types are handled according to policy parameters and with respect to latency sensitivity
Thank You
Debbie Montano
Director of Research & Education Alliances
dmontano@force10networks.com
www.force10networks.com