UltraLight Overview
Shawn McKee / University of Michigan
USATLAS Tier1 & Tier2 Network Planning Meeting
December 14, 2005 - BNL

The UltraLight Project
UltraLight is:
- A four-year $2M NSF ITR funded by MPS
- Application-driven network R&D
- A collaboration of BNL, Caltech, CERN, Florida, FIU, FNAL, Internet2, Michigan, MIT, SLAC
- Significant international participation: Brazil, Japan, Korea, amongst many others
Goal: Enable the network as a managed resource.
Meta-Goal: Enable physics analysis and discoveries which could not otherwise be achieved.

UltraLight Backbone
UltraLight has a non-standard core network with dynamic links and varying bandwidth interconnecting our nodes: an optical hybrid global network. The core of UltraLight evolves dynamically as a function of available resources on other backbones such as NLR, HOPI, Abilene or ESnet.
The main resources for UltraLight:
- LHCnet (IP, L2VPN, CCC)
- Abilene (IP, L2VPN)
- ESnet (IP, L2VPN)
- Cisco NLR wave (Ethernet)
- Cisco Layer 3 10 GE network
- HOPI NLR waves (Ethernet; provisioned on demand)
UltraLight nodes: Caltech, SLAC, FNAL, UF, UM, StarLight, CENIC PoP at LA, CERN

UltraLight Layer 1/2 Connectivity
[Diagram of the UltraLight Layer 1/2 network, courtesy of Dan Nae]

UltraLight Layer 3 Connectivity
[Diagram, courtesy of Dan Nae, showing the current UltraLight Layer 3 connectivity as of mid-October 2005]

UltraLight Network Usage

UltraLight Sites
UltraLight currently has 10 participating core sites (shown alphabetically). Details and diagrams for each site will be reported Tuesday during "Network" day.

    Site      Monitor           Type               Storage   Out of Band
    BNL       MonALISA          OC48, 10 GE ('06)  TBD       Y
    Caltech   MonALISA          10 GE              1 TB      Y
    CERN      MonALISA          OC192              9 TB      Y
    FIU       MonALISA          10 GE              TBD       Y
    FNAL      ?                 -                  TBD       Y
    I2        ?                 MPLS L2VPN         TBD       Y
    MIT       MonALISA          OC48               TBD       Y
    SLAC      MonALISA, IEPM    -                  TBD       Y
    UF        ?                 -                  1 TB      Y
    UM        MonALISA          10 GE              9 TB      Y

UltraLight Network: Phase I (plans for Phase I from Oct. 2004)
- Implementation via "sharing" with HOPI/NLR
- Also LA-CHI Cisco/NLR research wave
- DOE UltraScienceNet wave SNV-CHI (LambdaStation)
- Connectivity to FLR to be determined
- MIT involvement welcome, but unfunded
- AMPATH: UERJ, USP

UltraLight Network: Phase II
- Move toward multiple "lambdas"
- Bring in FLR, as well as BNL (and MIT)
- AMPATH: UERJ, USP
General comment: we are almost here!

UltraLight Network: Phase III
- Move into production
- Optical switching fully enabled amongst primary sites
- Integrated international infrastructure
- AMPATH: UERJ, USP
Certainly reasonable sometime in the next few years…

Workplan/Phased Deployment
UltraLight envisions a 4-year program to deliver a new, high-performance, network-integrated infrastructure:
- Phase I will last 12 months and focus on deploying the initial network infrastructure and bringing up first services.
- Phase II will last 18 months and concentrate on implementing all the needed services and extending the infrastructure to additional sites.
- Phase III will complete UltraLight and last 18 months; the focus will be on a transition to production in support of LHC physics (+ eVLBI astronomy, + ?).
We are HERE!

UltraLight Network Engineering
GOAL: Determine an effective mix of bandwidth-management techniques for this application space, particularly:
- Best-effort/"scavenger" using effective ultrascale protocols
- MPLS with QoS-enabled packet switching
- Dedicated paths arranged with TL1 commands, GMPLS
PLAN: Develop and test the most cost-effective integrated combination of network technologies on our unique testbed:
1. Exercise UltraLight applications on NLR, Abilene and campus networks, as well as LHCNet and our international partners
   - Progressively enhance Abilene with QoS support to protect production traffic
   - Incorporate emerging NLR and RON-based lightpath and lambda facilities
2. Deploy and systematically study ultrascale protocol stacks (such as FAST), addressing issues of performance and fairness
3. Use MPLS/QoS and other forms of bandwidth management, and adjustments of optical paths, to optimize end-to-end performance among a set of virtualized disk servers

UltraLight: Effective Protocols
The protocols used to reliably move data are a critical component of physics "end-to-end" use of the network. TCP is the most widely used protocol for reliable data transport, but it becomes ever less effective as the bandwidth-delay product of networks grows. UltraLight is exploring extensions to TCP (HSTCP, Westwood+, HTCP, FAST) designed to maintain fair sharing of networks while, at the same time, allowing efficient, effective use of these networks.
Currently FAST is in our "UltraLight kernel" (a customized kernel), which was used at SC2005. We plan to broadly deploy a related kernel with FAST; longer term we can then continue with access to FAST, HS-TCP, Scalable TCP, BIC and others.
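As an illustrative aside (not from the original slides): on a stock Linux host an application can request one of the kernel's loadable congestion-control modules per socket, while FAST itself required the patched UltraLight kernel. A minimal sketch, assuming Python 3.6+ on Linux and that the named module (htcp here, purely as an example) is available:

    import socket

    # Per-socket congestion-control selection (Linux only). Python exposes
    # TCP_CONGESTION on Linux since 3.6; 13 is the Linux option number,
    # used as a fallback so the sketch stays self-contained.
    TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)
    ALGO = b"htcp"   # assumption: this module is loaded on the host

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, ALGO)
        in_use = sock.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16)
        print("congestion control:", in_use.rstrip(b"\x00").decode())
    except OSError as err:
        print("could not select", ALGO.decode(), "->", err)
    finally:
        sock.close()

Modules compiled into or loaded by the running kernel are listed in /proc/sys/net/ipv4/tcp_available_congestion_control.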

UltraLight Kernel Development
Having a standard tuned kernel is very important for a number of UltraLight activities:
1. Breaking the 1 GB/sec disk-to-disk barrier
2. Exploring TCP congestion control protocols
3. Optimizing our capability for demos and performance
The current kernel incorporates the latest FAST and Web100 patches over a base kernel and includes the latest RAID and 10 GE NIC drivers. The UltraLight web page has a Kernel page, linked off the Workgroup->Network page, which provides the details.
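A small host-check sketch (added for illustration, not an UltraLight tool): it reports the running kernel, the congestion-control modules it offers, and whether Web100 instrumentation (which patched kernels expose under /proc/web100) is present.

    import os
    import platform

    # Report whether this host looks like a tuned, instrumented kernel.
    print("kernel:", platform.release())

    avail = "/proc/sys/net/ipv4/tcp_available_congestion_control"
    if os.path.exists(avail):
        with open(avail) as f:
            print("congestion control modules:", f.read().strip())

    # The Web100 patch exposes per-connection TCP statistics here.
    web100 = os.path.isdir("/proc/web100")
    print("web100 instrumentation:", "present" if web100 else "absent")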

MPLS/QoS for UltraLight
UltraLight plans to explore the full range of end-to-end connections across the network, from best-effort, packet-switched service through dedicated end-to-end light paths. MPLS paths with QoS attributes fill a middle ground in this network space and allow fine-grained allocation of virtual pipes, sized to the needs of the application or user.
UltraLight, in conjunction with the DoE/MICS-funded TeraPaths effort, is working toward extensible solutions for implementing such capabilities in next-generation networks.
[Figure: TeraPaths initial QoS test at BNL]
TeraPaths URL:
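For orientation (an added illustration, not from the slides): the per-flow differentiation that QoS-enabled paths act on is commonly signalled by DSCP markings set by the sending host; whether they are honoured depends entirely on the router/MPLS configuration along the path, for example in a TeraPaths-managed domain. A minimal sketch, using DSCP EF (46) purely as an example code point:

    import socket

    # Mark this socket's outgoing packets with DSCP EF (46). The TOS byte
    # carries the DSCP in its upper six bits, hence the shift by 2.
    DSCP_EF = 46
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)
    print("IP_TOS set to", hex(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
    sock.close()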

Optical Path Plans
Emerging "light path" technologies are becoming popular in the Grid community:
- They can extend and augment existing Grid computing infrastructures, currently focused on CPU/storage, to include the network as an integral Grid component.
- These technologies seem to be the most effective way to offer network resource provisioning on demand between end systems.
A major capability we are developing in UltraLight is the ability to dynamically switch optical paths across the node, bypassing electronic equipment via a fiber cross-connect. The ability to switch dynamically provides additional functionality and also models the more abstract case where switching is done between colors (ITU grid lambdas).
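To make the cross-connect idea concrete, here is a purely hypothetical sketch (class and method names are invented, not UltraLight software): a fiber cross-connect is essentially a mapping from input ports to output ports, and switching an optical path means rewriting part of that mapping without touching any electronic, packet-level equipment.

    # Hypothetical model of a fiber cross-connect.
    class CrossConnect:
        def __init__(self, name):
            self.name = name
            self.map = {}          # input port -> output port

        def connect(self, in_port, out_port):
            if out_port in self.map.values():
                raise ValueError(f"output port {out_port} already in use")
            self.map[in_port] = out_port

        def release(self, in_port):
            self.map.pop(in_port, None)

    # Example: steer the wave arriving on port 3 straight to a WAN port,
    # bypassing the router, then release it when the transfer is done.
    xc = CrossConnect("example-node")
    xc.connect(in_port=3, out_port=12)
    print(xc.map)                  # {3: 12}
    xc.release(3)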

MonALISA to Manage Light Paths
Dedicated modules to monitor and control optical switches, used to control:
- CALIENT (CIT)
- GLIMMERGLASS (CERN)
ML agent system:
- Used to create global paths
- The algorithm can be extended to include prioritisation and pre-allocation

Monitoring for UltraLight
Network monitoring is essential for UltraLight. We need to understand our network infrastructure and track its performance, both historically and in real time, to enable the network as a managed, robust component of our overall infrastructure.
There are two ongoing efforts we are leveraging to help provide the monitoring capability required:
- IEPM
- MonALISA
We are also looking at new tools like perfSONAR which may help provide a monitoring infrastructure for UltraLight.
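As a small added illustration of the kind of real-time rate data such monitoring collects, any Linux end host can sample its interface byte counters; the interface name eth0 below is an assumption.

    import time

    def rx_tx_bytes(iface="eth0"):
        """Read cumulative receive/transmit byte counters from /proc/net/dev."""
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(iface + ":"):
                    fields = line.split(":", 1)[1].split()
                    return int(fields[0]), int(fields[8])   # rx_bytes, tx_bytes
        raise ValueError(f"interface {iface} not found")

    # Sample over one second and report throughput in Gbit/s.
    rx0, tx0 = rx_tx_bytes()
    time.sleep(1.0)
    rx1, tx1 = rx_tx_bytes()
    print(f"in : {(rx1 - rx0) * 8 / 1e9:.3f} Gbit/s")
    print(f"out: {(tx1 - tx0) * 8 / 1e9:.3f} Gbit/s")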

MonALISA UltraLight Repository
The UL repository:

End-Systems Performance
Latest disk-to-disk over a 10 Gbps WAN: 4.3 Gbit/s (536 MB/sec), with 8 TCP streams from CERN to Caltech; windows, 1 TB file, 24 JBOD disks.
Server hardware: quad AMD Opteron processors with 3 AMD-8131 chipsets providing 64-bit/133 MHz PCI-X slots; 3 Supermicro Marvell SATA disk controllers + 24 SATA 7200 rpm disks.
- Local disk I/O: 9.6 Gbit/s (1.2 GB/sec read/write, with <20% CPU utilization)
- 10 GE NIC: 7.5 Gbit/s (memory-to-memory, with 52% CPU utilization)
- 2 x 10 GE NICs (802.3ad link aggregation): 11.1 Gbit/s (memory-to-memory)
- Need PCI-Express, TCP offload engines
- Need 64-bit OS? Which architectures and hardware?
Discussions are underway with 3Ware, Myricom and Supermicro to try to prototype viable servers capable of driving 10 GE networks in the WAN.
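A quick consistency check on those numbers (added here; the CERN-Caltech round-trip time is not stated on the slide, so 180 ms is assumed purely to illustrate the per-stream window arithmetic):

    # Worked arithmetic for the quoted 4.3 Gbit/s, 8-stream CERN -> Caltech transfer.
    aggregate_bps = 4.3e9
    print(f"aggregate: {aggregate_bps / 8 / 1e6:.1f} MB/s")     # ~537 MB/s, consistent with the quoted 536 MB/s

    streams = 8
    per_stream_bps = aggregate_bps / streams                    # ~0.54 Gbit/s per stream

    rtt_s = 0.180   # ASSUMED round-trip time, not given on the slide
    window_bytes = per_stream_bps / 8 * rtt_s                   # bandwidth-delay product per stream
    print(f"per-stream window at {rtt_s * 1000:.0f} ms RTT: {window_bytes / 1e6:.1f} MB")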

UltraLight Global Services
- Global Services support management and co-scheduling of multiple resource types, and provide strategic recovery mechanisms from system failures.
- Scheduling decisions are based on CPU, I/O and network capability and end-to-end task performance estimates, including loading effects.
- Decisions are constrained by local and global policies.
- Implementation: auto-discovering, multithreaded services and service engines to schedule threads, making the system scalable and robust.
Global Services consist of:
- Network and System Resource Monitoring, to provide pervasive end-to-end resource monitoring information to HLS
- Network Path Discovery and Construction Services, to provide network connections appropriate (sized/tuned) to the expected use
- Policy-Based Job Planning Services, balancing policy, efficient resource use and acceptable turnaround time
- Task Execution Services, with job-tracking user interfaces and incremental re-planning in case of partial incompletion
These types of services are required to deliver a managed network. Work along these lines is planned for OSG and future proposals to NSF and DOE.
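Before the 2008 usage scenario on the next slide, here is a purely hypothetical sketch (all names invented) of the decision an end-host agent might make when choosing among a best-effort path, an MPLS virtual pipe and a dedicated light path for a deadline-driven transfer:

    from dataclasses import dataclass

    @dataclass
    class PathOption:
        name: str                 # e.g. "best-effort", "virtual pipe", "lightpath"
        predicted_gbps: float     # estimate from the monitoring/prediction service
        available: bool           # a lightpath may be precluded by a prior reservation

    def choose_path(options, size_tb, deadline_hours):
        """Pick the first (cheapest) option whose predicted rate meets the deadline.

        Options are assumed ordered from least to most costly, mirroring the
        policy implied by the scenario on the next slide.
        """
        needed_gbps = size_tb * 8e12 / (deadline_hours * 3600) / 1e9
        for opt in options:
            if opt.available and opt.predicted_gbps >= needed_gbps:
                return opt, needed_gbps
        return None, needed_gbps

    # Roughly the example transfer: 1.2 TB within 2 hours 50 minutes.
    options = [
        PathOption("best-effort (FAST)", predicted_gbps=1.2, available=True),
        PathOption("virtual pipe (MPLS/QoS)", predicted_gbps=3.0, available=True),
        PathOption("lightpath", predicted_gbps=10.0, available=False),
    ]
    choice, needed = choose_path(options, size_tb=1.2, deadline_hours=2 + 50 / 60)
    print(f"need {needed:.2f} Gbit/s -> {choice.name if choice else 'no option meets the deadline'}")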

UltraLight Application in 2008

Node1> fts -vvv -in mercury.ultralight.org:/data01/big/zmumu05687.root -out venus.ultralight.org:/mstore/events/data -prio 3 -deadline +2:50 -xsum
FTS: Initiating file transfer setup…
FTS: Remote host responds ready
FTS: Contacting path discovery service
PDS: Path discovery in progress…
PDS: Path RTT ms, best effort path bottleneck is 10 GE
PDS: Path options found:
PDS: Lightpath option exists end-to-end
PDS: Virtual pipe option exists (partial)
PDS: High-performance protocol capable end-systems exist
FTS: Requested transfer 1.2 TB file transfer within 2 hours 50 minutes, priority 3
FTS: Remote host confirms available space for
FTS: End-host agent contacted…parameters transferred
EHA: Priority 3 request allowed for
EHA: request scheduling details
EHA: Lightpath prior scheduling (higher/same priority) precludes use
EHA: Virtual pipe sizeable to 3 Gbps available for 1 hour starting in 52.4 minutes
EHA: request monitoring prediction along path
EHA: FAST-UL transfer expected to deliver 1.2 Gbps (+0.8/-0.4) averaged over next 2 hours 50 minutes

EHA: Virtual pipe (partial) expected to deliver 3 Gbps (+0/-0.3) during reservation; variance from unprotected section < 0.3 Gbps (95% CL)
EHA: Recommendation: begin transfer using FAST-UL using network identifier #5A-3C1. Connection will migrate to MPLS/QoS tunnel in 52.3 minutes. Estimated completion in 1 hour minutes.
FTS: Initiating transfer between mercury.ultralight.org and venus.ultralight.org using #5A-3C1
EHA: Transfer initiated…tracking at URL: fts://localhost/FTS/AE13FF132-FAFE39A-44-5A-3C1
EHA: Reservation placed for MPLS/QoS connection along partial path: 3 Gbps beginning in 52.2 minutes; duration 60 minutes
EHA: Reservation confirmed, rescode #9FA-39AF2E, note: unprotected network section included
FTS: Transfer proceeding, average 1.1 Gbps, GB transferred
EHA: Connecting to reservation: tunnel complete, traffic marking initiated
EHA: Virtual pipe active: current rate 2.98 Gbps, estimated completion in minutes
FTS: Transfer complete, signaling EHA on #5A-3C1
EHA: Transfer complete received…hold for xsum confirmation
FTS: Remote checksum processing initiated…
FTS: Checksum verified…closing connection
EHA: Connection #5A-3C1 completed…closing virtual pipe with 12.3 minutes remaining on reservation
EHA: Resources freed. Transfer details uploading to monitoring node
EHA: Request successfully completed, transferred 1.2 TB in 1 hour 41.3 minutes (transfer 1 hour 34.4 minutes)

Supercomputing 2005
The Supercomputing conference (SC05) in Seattle, Washington held another "Bandwidth Challenge" in November 2005. A collaboration of high-energy physicists from Caltech, Michigan, Fermilab and SLAC (with help from BNL: thanks Frank and John!) won, achieving a peak network usage of 131 Gbps. This SC2005 BWC entry from HEP was designed to preview the scale and complexity of data operations among many sites interconnected by many 10 Gbps links.

Total Transfer in 24 hours

BWC Take-Away Summary
Our collaboration previewed the IT challenges of next-generation science at the high-energy physics frontier (for the LHC and other major programs):
- Petabyte-scale datasets
- Tens of national and transoceanic links at 10 Gbps (and up)
- 100+ Gbps aggregate data transport sustained for hours; we reached a petabyte/day transport rate for real physics data
The team set the scale and learned to gauge the difficulty of the global networks and transport systems required for the LHC mission:
- Set up, shook down and successfully ran the system in < 1 week
Substantive take-aways from this marathon exercise:
- An optimized Linux kernel (FAST + NFSv4) for data transport, after 7 full kernel-build cycles in 4 days
- A newly optimized application-level copy program, bbcp, that matches the performance of iperf under some conditions
- Extensions of Xrootd, an optimized low-latency file access application for clusters, across the wide area
- Understanding of the limits of 10 Gbps-capable systems under stress

UltraLight and ATLAS
UltraLight has deployed and instrumented an UltraLight network and made good progress toward defining and constructing the needed "managed network" infrastructure. The developments in UltraLight are targeted at providing needed capabilities and infrastructure for the LHC. We have some important activities which are ready for additional effort:
- Achieving 10 GE disk-to-disk transfers using single servers
- Evaluating TCP congestion control protocols over UL links
- Deploying embryonic network services to further the UL vision
- Implementing some forms of MPLS/QoS and optical path control as part of standard UltraLight operation
- Enabling automated end-host tuning and negotiation
We want to extend the footprint of UltraLight to include as many interested sites as possible, to help ensure its developments meet the LHC's needs.
Questions?

Michigan Setup for BWC

Effort at Michigan
Michigan connected three wavelengths for SC2005 and was supported by the School of Information, ITCOM, CITI and the Medical School for this BWC. We were able to fill almost 30 Gbps during the BWC.

Details of Transfer Amounts
Within 2 hours an aggregate of TB (terabytes) was transferred, with sustained transfer rates ranging from 90 Gbps to 150 Gbps and a measured peak of 151 Gbps. During the whole day (24 hours) on which the bandwidth challenge took place, approximately 475 TB were transferred. This number (475 TB) is lower than what the Caltech/SLAC/FNAL/Michigan-led team was capable of, since they did not always have exclusive access to the waves outside the bandwidth challenge time slot. If you multiply the amount transferred in the 2-hour window by 12 (to represent a whole day), you get approximately 1.1 PB (petabytes). Transferring this amount of data in 24 hours is equivalent to a transfer rate of 3.8 (DVD) movies per second, assuming an average size of 3.5 GB per movie.
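A quick check of that closing figure (added for illustration; with decimal units it comes out near 3.6 movies per second, consistent with the quoted ~3.8 once rounding and binary-versus-decimal unit conventions are taken into account):

    # 1.1 PB moved in 24 hours, expressed as DVD movies per second
    # (3.5 GB per movie, as stated above; decimal units assumed).
    total_bytes = 1.1e15
    seconds = 24 * 3600
    movie_bytes = 3.5e9
    rate = total_bytes / seconds
    print(f"{rate / 1e9:.1f} GB/s = {rate / movie_bytes:.1f} movies/s")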