SC|05 Bandwidth Challenge
ESCC Meeting, 9th February '06
Yee-Ting Li, Stanford Linear Accelerator Center

LHC Network Requirements
[Diagram: the LHC tiered computing model. CERN/outside resource ratio ~1:2; Tier0 / (sum of Tier1s) / (sum of Tier2s) ~1:1:1. The online system and physics data cache feed the CERN Tier 0+1 centre (petabytes of disk, tape robot); Tier 1 centres (FNAL, IN2P3, INFN, RAL) connect at ~10 Gbps; Tier 2 centres at 1-10 Gbps; Tier 3/4 institute workstations sit below. Tens of petabytes of data expected, growing to an exabyte ~5-7 years later.]

Overview
- Bandwidth Challenge: 'The Bandwidth Challenge highlights the best and brightest in new techniques for creating and utilizing vast rivers of data that can be carried across advanced networks.'
- Transfer as much data as possible using real applications over a 2-hour window
- We did... 'Distributed TeraByte Particle Physics Data Sample Analysis': 'Demonstrated high speed transfers of particle physics data between host labs and collaborating institutes in the USA and worldwide. Using state of the art WAN infrastructure and Grid Web Services based on the LHC Tiered Architecture, they showed real-time particle event analysis requiring transfers of Terabyte-scale datasets.'

Overview
- In detail, during the bandwidth challenge (2 hours):
  - 131 Gbps measured by the SCInet BWC team on 17 of our waves (15-minute average)
  - 95.37 TB of data transferred (3.8 DVDs per second; see the quick calculation below)
  - Peak of 150.7 Gbps
- On the day of the challenge:
  - Transferred ~475 TB 'practising' (waves were shared, still tuning applications and hardware)
  - Peak one-way USN utilisation observed on a single link was 9.1 Gbps (Caltech) and 8.4 Gbps (SLAC)
- Also wrote to StorCloud:
  - SLAC: wrote 3.2 TB in 1649 files during the BWC
  - Caltech: 6 GB/sec with 20 nodes
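As a quick cross-check of the headline numbers, the sketch below recomputes the average rate implied by 95.37 TB over the two-hour window; the ~3.5 GB per-DVD size is an assumption chosen so the result lines up with the slide's "3.8 DVDs per second", not a figure from the talk.

```python
# A minimal sanity check of the figures above (decimal TB assumed).
bytes_moved = 95.37e12          # 95.37 TB transferred during the challenge
window_s = 2 * 60 * 60          # the 2-hour Bandwidth Challenge window
dvd_bytes = 3.5e9               # assumed per-DVD size, not from the slide

avg_gbps = bytes_moved * 8 / window_s / 1e9
dvds_per_s = bytes_moved / window_s / dvd_bytes

print(f"average over the window: {avg_gbps:.1f} Gbit/s")   # ~106 Gbit/s
print(f"DVD-equivalents per second: {dvds_per_s:.1f}")     # ~3.8
```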

Participants
- Caltech/HEP/CACR/NetLab: Harvey Newman, Julian Bunn (contact), Dan Nae, Sylvain Ravot, Conrad Steenberg, Yang Xia, Michael Thomas
- SLAC/IEPM: Les Cottrell, Gary Buhrmaster, Yee-Ting Li, Connie Logg
- FNAL: Matt Crawford, Don Petravick, Vyto Grigaliunas, Dan Yocum
- University of Michigan: Shawn McKee, Andy Adamson, Roy Hockett, Bob Ball, Richard French, Dean Hildebrand, Erik Hofer, David Lee, Ali Lotia, Ted Hanss, Scott Gerstenberger
- U Florida: Paul Avery, Dimitri Bourilkov
- University of Manchester: Richard Hughes-Jones
- CERN, Switzerland: David Foster
- KAIST, Korea: Yusung Kim
- Kyungpook National University, Korea: Kihwan Kwon
- UERJ, Brazil: Alberto Santoro
- UNESP, Brazil: Sergio Novaes
- USP, Brazil: Luis Fernandez Lopez
- GLORIAD, USA: Greg Cole, Natasha Bulashova

Networking Overview
- We had 22 10 Gbit/s waves to the Caltech and SLAC/FNAL booths. Of these:
  - 15 waves to the Caltech booth: from Florida (1), Korea/GLORIAD (1), Brazil (1 x 2.5 Gbit/s), Caltech (2), LA (2), UCSD (1), CERN (2), U Michigan (3), FNAL (2)
  - 7 waves to the SLAC/FNAL booth: 2 from SLAC, 1 from the UK, and 4 from FNAL
- The waves were provided by Abilene, CANARIE, Cisco (5), ESnet (3), GLORIAD (1), HOPI (1), Michigan Light Rail (MiLR), National LambdaRail (NLR), TeraGrid (3) and UltraScienceNet (4).

Network Overview

Hardware (SLAC only)
- At SLAC:
  - 14 x 1.8 GHz Sun v20z (dual Opteron)
  - 2 x Sun 3500 disk trays (2 TB of storage)
  - 12 x Chelsio T110 10 Gb NICs (LR)
  - 2 x Neterion/S2io Xframe I NICs (SR)
  - Dedicated Cisco 6509 with 4 x 4-port 10 Gb blades
- At SC|05:
  - 14 x 2.6 GHz Sun v20z (dual Opteron)
  - 10 QLogic HBAs for StorCloud access
  - 50 TB of storage at SC|05 provided by 3PAR (shared with Caltech)
  - 12 x Neterion/S2io Xframe I NICs (SR)
  - 2 x Chelsio T110 NICs (LR)
  - Shared Cisco 6509 with 6 x 4-port 10 Gb blades

Hardware at SC|05

Software
- bbcp ('BaBar File Copy')
  - Uses ssh for authentication
  - Multiple-stream capable (see the transfer sketch below)
  - Features 'rate synchronisation' to reduce byte retransmissions
  - Sustained over 9 Gbps on a single session
- xrootd
  - Library for transparent file access (standard Unix file functions)
  - Designed primarily for LAN access (transaction-based protocol)
  - Managed over 35 Gbit/s (in two directions) on 2 x 10 Gbps waves
  - Transferred 18 TB in 257,913 files
- dCache
  - 20 Gbps of production and test cluster traffic
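For context on how a multi-stream bbcp copy is typically driven, here is a minimal sketch; the host and path names are placeholders, and the -s (parallel streams), -w (window) and -P (progress) flags reflect common bbcp builds and should be checked against the installed version's help output.

```python
"""Sketch: driving bbcp for a multi-stream transfer over a long fat pipe.

Assumes bbcp is installed on both ends and reachable over ssh. The flag
meanings (-s parallel streams, -w per-stream window, -P progress interval)
are as in common bbcp builds; verify with `bbcp -h` for your version.
"""
import subprocess

def bbcp_copy(src, dest, streams=8, window="8m"):
    cmd = [
        "bbcp",
        "-s", str(streams),   # number of parallel TCP streams
        "-w", window,         # per-stream TCP window size
        "-P", "10",           # progress report every 10 seconds
        src, dest,
    ]
    # bbcp authenticates via ssh, so this behaves like a many-stream scp
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # hypothetical source file and destination host
    bbcp_copy("/data/run123.root", "user@far-end.example.org:/scratch/run123.root")
```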

Last year (SC|04) BWC Aggregate Bandwidth

Cumulative Data Transferred [graph; Bandwidth Challenge period marked]

Component Traffic

Bandwidth Contributions [graph: traffic in to and out from the booth; links: SLAC-ESnet, FermiLab-HOPI, SLAC-ESnet-USN, FNAL-UltraLight, UKLight, SLAC-FermiLab-UK]

SLAC Cluster Contributions [graph: traffic in to and out from the booth; ESnet routed vs ESnet SDN layer 2 via USN; Bandwidth Challenge period marked]

SLAC/FNAL Booth Aggregate [graph: Mbps per wave]

Problems...
- Managerial/PR
  - Initial request for loan hardware took place 6 months in advance!
  - Lots and lots of paperwork to keep track of all the loaned equipment
- Logistical
  - Set up and tore down a pseudo-production network and servers in the space of a week!
  - Testing could not begin until the waves were alight; most waves were lit the day before the challenge!
  - Shipping so much hardware is not cheap!
  - Setting up monitoring

Problems...
- Tried to configure hardware and software prior to the show
- Hardware
  - NICs: we had 3 bad Chelsios (bad memory); Xframe IIs did not work in UKLight's Boston machines
  - Hard disks: 3 dead 10K disks (had to ship in spares)
  - 1 x 4-port 10 Gb blade DOA
  - MTU mismatch between domains (see the path-MTU check sketch below)
  - Router blade died during stress testing the day before the BWC!
  - Cables! Cables! Cables!
- Software
  - Used golden disks for duplication (still takes 30 minutes per disk to replicate!)
  - Linux kernels: the version we initially used showed severe performance problems compared to the alternative
  - (New) router firmware caused crashes under heavy load; unfortunately this was only discovered just before the BWC, so we had to manually restart the affected ports during the BWC
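One way an MTU mismatch shows up is that full-size don't-fragment packets silently vanish between domains; the hedged sketch below probes for this with Linux ping (iputils flags assumed; the hostname is a placeholder).

```python
"""Rough path-MTU probe: send don't-fragment pings at standard and jumbo sizes.

A 9000-byte jumbo-frame path should carry an 8972-byte ICMP payload
(9000 - 20 bytes IP - 8 bytes ICMP). Linux iputils ping flags assumed.
"""
import subprocess

def df_ping_ok(host, payload_bytes):
    # -M do : set the DF bit, -s : ICMP payload size, -c 3 : three probes
    result = subprocess.run(
        ["ping", "-M", "do", "-s", str(payload_bytes), "-c", "3", host],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    host = "far-end.example.org"   # placeholder
    for payload in (1472, 8972):   # 1500- and 9000-byte frames respectively
        status = "pass" if df_ping_ok(host, payload) else "fail"
        print(f"{payload + 28}-byte frames: {status}")
```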

Problems
- Most transfers were from memory to memory (ramdisk etc.)
- Local caching of (small) files in memory
- Reading and writing to disk will be the next bottleneck to overcome (see the throughput comparison sketch below)
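To make the disk-versus-memory gap concrete, a minimal sketch along these lines compares raw write throughput to a tmpfs ramdisk and to a disk-backed path; it assumes a Linux host where /dev/shm is memory-backed, and /data stands in for a real disk filesystem.

```python
"""Sketch: compare write throughput to a tmpfs ramdisk vs. a disk path."""
import os, time

def write_throughput_gbps(path, total_bytes=1 << 30, chunk=4 << 20):
    buf = os.urandom(chunk)
    start = time.time()
    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            f.write(buf)
            written += chunk
        f.flush()
        os.fsync(f.fileno())      # force data out of the page cache
    elapsed = time.time() - start
    os.remove(path)
    return total_bytes * 8 / elapsed / 1e9

if __name__ == "__main__":
    print("ramdisk:", round(write_throughput_gbps("/dev/shm/bwtest"), 2), "Gbit/s")
    print("disk:   ", round(write_throughput_gbps("/data/bwtest"), 2), "Gbit/s")
```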

Conclusion
- Previewed the IT challenges of the next generation of data-intensive science applications (high energy physics, astronomy, etc.)
  - Petabyte-scale datasets
  - Tens of national and transoceanic links at 10 Gbps (and up)
  - 100+ Gbps aggregate data transport sustained for hours; we reached a petabyte/day transport rate for real physics data
- Learned to gauge the difficulty of the global networks and transport systems required for the LHC mission
- Set up, shook down and successfully ran the systems in < 1 week
- Understood and optimized the configurations of various components (network interfaces, routers/switches, OS, TCP kernels, applications) for high performance over the wide area network (see the buffer-sizing sketch below)
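Much of that wide-area tuning comes down to sizing TCP socket buffers for the bandwidth-delay product of a long fat pipe. The sketch below shows the arithmetic with illustrative numbers (10 Gbit/s, 100 ms RTT) rather than the exact values used at SC|05, using the standard Linux sysctl names.

```python
"""Buffer-sizing arithmetic for a long fat pipe (illustrative values only)."""
link_gbps = 10.0     # one 10 GE wave
rtt_s = 0.100        # ~100 ms coast-to-coast / transatlantic RTT

bdp_bytes = int(link_gbps * 1e9 / 8 * rtt_s)     # bandwidth-delay product = 125 MB
print(f"bandwidth-delay product: {bdp_bytes / 1e6:.0f} MB")

# Corresponding entries one might place in /etc/sysctl.conf:
print(f"net.core.rmem_max = {bdp_bytes}")
print(f"net.core.wmem_max = {bdp_bytes}")
print(f"net.ipv4.tcp_rmem = 4096 87380 {bdp_bytes}")
print(f"net.ipv4.tcp_wmem = 4096 65536 {bdp_bytes}")
```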

Conclusion
- Products from this exercise:
  - An optimized Linux kernel (NFSv4 + FAST and other TCP stacks) for data transport, after 7 full kernel-build cycles in 4 days
  - A newly optimized application-level copy program, bbcp, that matches the performance of iperf under some conditions
  - Extensions of xrootd, an optimized low-latency file access application for clusters, across the wide area
  - Understanding of the limits of 10 Gbps-capable systems under stress
  - How to effectively utilize 10 GE and 1 GE connected systems to drive 10-gigabit wavelengths in both directions
  - Use of production and test clusters at FNAL reaching more than 20 Gbps of network throughput
- Significant efforts remain from the perspective of high-energy physics:
  - Management, integration and optimization of network resources
  - End-to-end capabilities able to utilize these network resources, including applications and I/O devices (disk and storage systems)

Press and PR
- 11/8/05 - 'Brit Boffins aim to Beat LAN speed record', vnunet.com
- 'SC|05 Bandwidth Challenge', SLAC Interaction Point
- 'Top Researchers, Projects in High Performance Computing Honored at SC/05...', Business Wire (press release), San Francisco, CA, USA
- 11/18/05 - Official winner announcement
- 11/18/05 - SC|05 Bandwidth Challenge slide presentation
- 11/23/05 - 'Bandwidth Challenge Results', Slashdot
- 12/6/05 - Caltech press release
- 12/6/05 - 'Neterion Enables High Energy Physics Team to Beat World Record Speed at SC05 Conference', CCN Matthews News Distribution Experts
- 'High energy physics team captures network prize at SC|05', SLAC
- 'High energy physics team captures network prize at SC|05', EurekAlert!
- 12/7/05 - 'High Energy Physics Team Smashes Network Record', Science Grid This Week
- 'Congratulations to our Research Partners for a New Bandwidth Record at SuperComputing 2005', Neterion

SLAC/UK Contribution [graph: traffic in to and out from the booth; ESnet routed, ESnet/USN layer 2, UKLight]

SLAC/ESnet Contribution [graph: Mbps per host and aggregate]

FermiLab Contribution [graph: HOPI, USN and UltraLight links]