Slide 1: How High Performance Ethernet Plays in RONs, GigaPOPs & Grids
Internet2 Member Meeting, September 20, 2005
www.force10networks.com
Slide 2: Speakers Today
– Brian Savory: Southern Light Rail (SLR), Executive Director; Southern Crossroads (SOX) GigaPOP; Georgia Tech
– Matt Davy: Indiana GigaPOP, Chief Network Engineer; I-Light Optical Fiber Initiative; Indiana University & The Global NOC
– Debbie Montano: Force10 Networks, Director of Research & Education Alliances; dmontano@force10networks.com
Slide 3: Ethernet Has an Essential Role
Even with optical networking and new network designs on the rise, Ethernet (GigE / 10 GigE) has an essential role to play in the network architectures of:
– Regional Optical Networks (RONs)
– GigaPOPs (state/regional advanced education networks)
– Grids/Clusters
Today we will discuss how high performance Ethernet fits into new network designs and into the service definitions and offerings of RONs, GigaPOPs and Grids.
Ethernet continues to grow and take over:
– LANs and campuses
– Metro Ethernet
– Ethernet on the Wide Area Network (WAN), for both traditional carriers and R&E networks
– Grids / clusters / supercomputers
– New support for WAN PHY, high densities (90-port card), etc.
Slide 4: Grid Example: TeraGrid
TeraGrid is an impressive national resource. More than two-thirds of TeraGrid computing facilities use high performance Ethernet (Force10 E-Series switch/routers) to interconnect to users and the WAN.
Slide 5: Grids / Clusters
System interconnects:
– CPU-to-CPU: Inter-Processor Communication (IPC)
– Management network
– I/O to users and the outside world (campus, LAN, WAN)
– Storage: servers & storage subsystems
IPC interconnect technology (GigE now #1):
– Top 500 supercomputers
– Ethernet growing rapidly, favored in clusters
Other system interconnection: major reliance on Ethernet.

Top 500 interconnect share by type:
Type          2004    2005
Ethernet      35.2%   42.4%
Myrinet       38.6%   28.2%
SP Switch      9.2%    9.0%
NUMAlink       3.4%    4.2%
Crossbar       4.6%    4.2%
Proprietary     –      3.4%
Infiniband     2.2%    3.2%
Quadrics       4.0%    2.6%
Other          2.8%     –
Slide 6: CERN Deploys Force10
Announced yesterday (9/19/2005):
– CERN will deploy the TeraScale E-Series family of switch/routers as the foundation of its new 2.4 Terabit per second (Tbps) high performance grid computing farm.
– The TeraScale E-Series will connect more than 8,000 processors and storage devices.
– The deployment also provides the first intercontinental 10 Gigabit Ethernet WAN links in a production network.
– The TeraScale E-Series will also provide 10 Gigabit Ethernet connections to CERN's multiple campus-based experiments, which are similar to individual computing clusters.
– CERN was looking for a partner that could provide and support a very high performance networking solution, and selected Force10.
Slide 7: RONs, GigaPOPs & Ethernet
Regional Optical Networks (RONs) are growing:
– Using existing fiber or laying new fiber
– More ownership of fiber
– More dedicated lambdas deployed
– But that's not the end of the story…
Ethernet (1 GigE / 10 GigE):
– Layer 2 service
– Sub-divides and shares those hefty lambdas
– Access to multiple networks & projects on one lambda
– Dynamic control of paths and interconnects
GigaPOPs are embracing more Ethernet / Layer 2 services.
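The "sub-divide and share" point above rests on 802.1Q VLAN tagging: a 4-byte tag inserted after the source MAC address lets one physical link, such as a 10 GigE lambda, carry many independent Layer 2 networks. A minimal sketch of the frame format (illustrative helper functions, not vendor-specific code; field layout per IEEE 802.1Q):

```python
import struct

def vlan_tag_frame(dst_mac: bytes, src_mac: bytes, vlan_id: int,
                   ethertype: int, payload: bytes, priority: int = 0) -> bytes:
    """Build an 802.1Q-tagged Ethernet frame.

    The tag is TPID 0x8100 followed by a 16-bit TCI:
    3-bit priority, 1-bit DEI (left 0 here), 12-bit VLAN ID.
    """
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (priority << 13) | vlan_id
    return (dst_mac + src_mac
            + struct.pack("!HH", 0x8100, tci)   # 802.1Q tag
            + struct.pack("!H", ethertype)      # inner EtherType
            + payload)

def vlan_id_of(frame: bytes) -> int:
    """Extract the 12-bit VLAN ID from a tagged frame."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == 0x8100, "not an 802.1Q-tagged frame"
    return tci & 0x0FFF
```

Because the VLAN ID is only 12 bits, a single lambda can be carved into at most 4094 usable logical networks, each delivered to a different project or peer network.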
Slide 8: New Architectures: HOPI
[Diagram: a HOPI node contains a Force10 E600 switch/router (packet layer) and an optical cross connect (optical layer), with control, measurement, support and out-of-band (OOB) functions. The node connects to the Abilene network via an Abilene core router, to an NLR 10 GigE lambda via NLR optical terminals, to a 10 GigE backbone, and to Regional Optical Networks (RONs) and GigaPOPs.]
Slide 9: Force10 Participation in the Internet2 HOPI Project
HOPI (Hybrid Optical Packet Infrastructure):
– Examines a hybrid of shared IP packet switching and dynamically provisioned optical lambdas
– Models scalable next-generation networks
Force10's role:
– Force10 Ethernet switches with 10 GigE ports
– Dynamic control of Ethernet VLANs to allocate and configure network paths
– Internet2 corporate partner & HOPI project partner
– Five E600 switch/routers at HOPI nodes in Los Angeles, DC, Chicago, Seattle & New York
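The "dynamic control of Ethernet VLANs" above can be pictured as a small allocator: each dynamically provisioned path between nodes is assigned a free VLAN ID, which the switches along the route would then be configured to carry. This is a toy model, not the actual HOPI control software; the class and method names are hypothetical:

```python
class VlanPathAllocator:
    """Toy model of dynamic Layer 2 path provisioning: map each
    requested node-to-node path to a free 12-bit VLAN ID."""

    def __init__(self, first_vid: int = 100, last_vid: int = 4094):
        self._free = list(range(first_vid, last_vid + 1))
        self.paths = {}  # vid -> (src_node, dst_node)

    def allocate(self, src: str, dst: str) -> int:
        """Reserve a VLAN ID for a path from src to dst."""
        if not self._free:
            raise RuntimeError("no free VLAN IDs")
        vid = self._free.pop(0)
        self.paths[vid] = (src, dst)
        return vid

    def release(self, vid: int) -> None:
        """Tear down a path and return its VLAN ID to the pool."""
        self.paths.pop(vid)
        self._free.append(vid)
```

In a real deployment, allocate/release would additionally push VLAN configuration to every switch on the chosen route; here only the bookkeeping is shown.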
Slide 10: Force10 Networks, Inc.
Leaders in 10 GbE switching & routing:
– Founded in 1999, privately held
– First to ship line-rate 10 GbE switching & routing
– Pioneered a new switch/router architecture providing best-in-class resiliency and density, simplifying network topologies
– Customer base spans academic/research, data center, enterprise and service provider markets
– Fastest growing 10 GbE vendor (92% in 1H04)
– April 2005: the TeraScale E300 switch/router named winner of the Networking Infrastructure category of eWEEK's Fifth Annual Excellence Awards program
Slide 11: TeraScale E-Series – A New Generation of 10 Gigabit Ethernet
Industry's best density per chassis:
– 672 line-rate GbE ports
– 56 line-rate 10 GbE ports
Driving down price per port:
– First line-rate 48-port GbE line card & line-rate 4-port 10 GbE line card
Industry's best performance:
– First true Terabit-per-second switch/router, processing 1 billion packets per second
Industry's most scalable resiliency:
– Zero packet loss hitless failover at Terabit data rates
Industry's most scalable security:
– No performance degradation with 1 million ACLs
Investment protection for 100 GbE
Chassis family: E1200, E600, E300
Slide 12: Force10 Firsts…
– Jan 2002: First line-rate 10 GbE system shipped (E1200)
– Apr 2002: First line-rate 336 GbE ports demo
– Oct 2002: First line-rate 10 GbE mid-size system shipped (E600)
– Nov 2003: First line-rate 10 GbE compact-size system shipped (E300); first public zero packet loss hitless failover demo
– Sept 2004: First line-rate 672 GbE / 56 10 GbE ports
– March 2005: First 48 GbE x 10 GbE purpose-built data center switch
– April 2005: First >1200 GbE ports per chassis
Slide 13: Thank You