
UltraLight: Overview and Status
Shawn P. McKee / University of Michigan
International ICFA Workshop on HEP Networking, Grid and Digital Divide Issues for Global e-Science, May 25, 2005, Daegu, Korea
[Title slide graphic: UltraLight 10GE network map, including Brazil (UERJ, USP)]

The UltraLight Project
UltraLight is:
- A four-year, $2M NSF ITR funded by MPS
- Application-driven network R&D
- A collaboration of BNL, Caltech, CERN, Florida, FIU, FNAL, Internet2, Michigan, MIT and SLAC
Two primary, synergistic activities:
- Network "Backbone": perform network R&D / engineering
- Applications "Driver": system services R&D / engineering
Goal: Enable the network as a managed resource.
Meta-Goal: Enable physics analysis and discoveries which could not otherwise be achieved.

Impact on the Science
- Flagship applications: HENP, eVLBI, biomedical "burst" imaging
- Use the new capability to create prototype computing infrastructures for CMS and ATLAS
  - Extend and enhance their Grid infrastructures, promoting the network to an active, managed element
  - Exploit the MonALISA, GEMS, GAE & Sphinx infrastructures
- Enable massive data transfers to coexist with production traffic and real-time collaborative streams
  - Support "terabyte-scale" data transactions (minutes, not hours)
- Extend real-time eVLBI to the 10-100 Gbps range
  - Higher resolution (greater physics reach), faster turnaround
  - New real-time correlation approach for important astrophysical events, such as gamma-ray bursts or supernovae

Project Coordination Plan
- H. Newman, Director & PI
- Steering Group: Avery, Bunn, McKee, Summerhill, Ibarra, Whitney
- TC Group: H. Newman + TC leaders (Cavanaugh, McKee, Kramer, Van Lingen); Summerhill, Ravot, Legrand
Project coordination:
1. Rick Cavanaugh, Project Coordinator: overall coordination and deliverables
2. Shawn McKee, Network Technical Coordinator: coordination of building the UltraLight network infrastructure (with Ravot)
3. Frank Van Lingen, Application Services Technical Coordinator: coordination of building the system of Grid and Web services for applications (with Legrand)
4. Laird Kramer, Education and Outreach Coordinator: develop learning sessions and collaborative class and research projects for undergraduates and high school students
External Advisory Board: being formed.

UltraLight Backbone
UltraLight has a non-standard core network, with dynamic links and varying bandwidth interconnecting our nodes: an optical, hybrid global network. The core of UltraLight will dynamically evolve as a function of available resources on other backbones such as NLR, HOPI, Abilene or ESnet.
The main resources for UltraLight:
- LHCnet (IP, L2VPN, CCC)
- Abilene (IP, L2VPN)
- ESnet (IP, L2VPN)
- Cisco NLR wave (Ethernet)
- HOPI NLR waves (Ethernet; provisioned on demand)
UltraLight nodes: Caltech, SLAC, FNAL, UF, UM, StarLight, CENIC PoP at LA, CERN.

International Partners
One of the UltraLight program's strengths is the large number of important international partners we have:
- AMPATH
- AARNet
- Brazil/UERJ/USP
- CA*net4
- GLORIAD
- IEEAF
- Korea/KOREN
- NetherLight
- UKLight
as well as collaborators from China, Japan and Taiwan. UltraLight is well positioned to develop and coordinate global advances to networks for LHC physics.

UltraLight Sites
UltraLight currently has 10 participating core sites (shown alphabetically). The table provides a quick summary of the near-term connectivity plans; details and diagrams for each site and its regional networks are given in our technical report.

Site      Date       Type         Storage       Out of Band
BNL       March      OC48         TBD           TBD
Caltech   January    10 GE        1 TB (May)    Y
CERN      January    OC192        9 TB (July)   Y
FIU       January    OC12         TBD           Y
FNAL      March      10 GE        TBD           TBD
I2        March      MPLS L2VPN   TBD           TBD
MIT       May        OC48         TBD           TBD
SLAC      July                    TBD           TBD
UF        February                1 TB (May)    Y
UM        April      10 GE        9 TB (July)   Y

Workplan / Phased Deployment
UltraLight envisions a 4-year program to deliver a new, high-performance, network-integrated infrastructure:
- Phase I (12 months): deploy the initial network infrastructure and bring up the first services. (Note: we are well on our way; the network is almost up and the first services are being deployed.)
- Phase II (18 months): implement all the needed services and extend the infrastructure to additional sites. (We are entering this phase now.)
- Phase III (18 months): complete UltraLight, focusing on the transition to production in support of LHC physics, plus eVLBI astronomy.

UltraLight Network: Phase I
- Implementation via "sharing" with HOPI/NLR
- Also the LA-CHI Cisco/NLR research wave and the DOE UltraScienceNet wave SNV-CHI (LambdaStation)
- Connectivity to FLR to be determined
- MIT involvement welcome, but unfunded
[Slide graphic: Phase I network map, including AMPATH and Brazil (UERJ, USP)]

UltraLight Network: Phase II
- Move toward multiple "lambdas"
- Bring in FLR, as well as BNL (and MIT)
[Slide graphic: Phase II network map, including AMPATH and Brazil (UERJ, USP)]

UltraLight Network: Phase III
- Move into production
- Optical switching fully enabled amongst primary sites
- Integrated international infrastructure
[Slide graphic: Phase III network map, including AMPATH and Brazil (UERJ, USP)]

UltraLight Network Engineering
GOAL: Determine an effective mix of bandwidth-management techniques for this application space, particularly:
- Best-effort/"scavenger" using effective ultrascale protocols
- MPLS with QoS-enabled packet switching
- Dedicated paths arranged with TL1 commands, GMPLS
PLAN: Develop and test the most cost-effective integrated combination of network technologies on our unique testbed:
1. Exercise UltraLight applications on NLR, Abilene and campus networks, as well as LHCNet and our international partners
   - Progressively enhance Abilene with QoS support to protect production traffic
   - Incorporate emerging NLR and RON-based lightpath and lambda facilities
2. Deploy and systematically study ultrascale protocol stacks (such as FAST), addressing issues of performance and fairness
3. Use MPLS/QoS and other forms of bandwidth management, and adjustments of optical paths, to optimize end-to-end performance among a set of virtualized disk servers

UltraLight: Effective Protocols
The protocols used to reliably move data are a critical component of physics "end-to-end" use of the network. TCP is the most widely used protocol for reliable data transport, but it becomes less and less effective as the bandwidth-delay product of networks grows. UltraLight will explore extensions to TCP (HSTCP, Westwood+, HTCP, FAST) designed to maintain fair sharing of networks while still allowing efficient, effective use of these networks. UltraLight plans to identify the most effective fair protocol and implement it in support of our "best effort" network components.
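For concreteness (this example is ours, not from the slides): on a Linux end host the congestion-control variant used by a transfer can be selected per socket, which is one way such protocol comparisons are carried out in practice. The sketch below assumes a Linux kernel with the named modules (e.g. htcp, westwood) available; the host and port are placeholders.

```python
import socket

# Minimal sketch: choose a TCP congestion-control algorithm per socket on Linux.
# Assumes the requested module (e.g. "htcp", "westwood") is available; the
# choices on a given host are listed in
# /proc/sys/net/ipv4/tcp_available_congestion_control.
def connect_with_cc(host: str, port: int, algorithm: str = "htcp") -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # TCP_CONGESTION is Linux-specific (exposed by Python 3.6+).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, algorithm.encode())
    s.connect((host, port))
    return s

if __name__ == "__main__":
    # Hypothetical endpoint, for illustration only.
    sock = connect_with_cc("data.example.org", 5001, "htcp")
    print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))
    sock.close()
```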

MPLS/QoS for UltraLight
UltraLight plans to explore the full range of end-to-end connections across the network, from best-effort, packet-switched transport through dedicated end-to-end light paths. MPLS paths with QoS attributes fill a middle ground in this network space and allow fine-grained allocation of virtual pipes, sized to the needs of the application or user. UltraLight, in conjunction with the DoE/MICS-funded TeraPaths effort, is working toward extensible solutions for implementing such capabilities in next-generation networks.
[Slide graphic: TeraPaths initial QoS test at BNL]
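As a small illustration (not taken from the slides), QoS-enabled paths of this kind generally depend on traffic being marked at the end host so the network can classify it. The sketch below marks a socket's packets with an assumed DSCP class (AF21); the actual code point and policy would be dictated by the site and the TeraPaths/MPLS configuration.

```python
import socket

# Minimal sketch: mark outgoing packets with a DSCP code point so that
# QoS-enabled routers (e.g. along an MPLS path) can classify the flow.
# The class chosen here (AF21 = DSCP 18) is only an example.
AF21_DSCP = 18
TOS_VALUE = AF21_DSCP << 2   # DSCP occupies the upper 6 bits of the TOS byte

def make_marked_socket() -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
    return s

if __name__ == "__main__":
    s = make_marked_socket()
    print("TOS byte set to", s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
    s.close()
```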

Optical Path Plans
Emerging "light path" technologies are becoming popular in the Grid community:
- They can extend and augment existing Grid computing infrastructures, currently focused on CPU/storage, to include the network as an integral Grid component.
- They seem to be the most effective way to offer network resource provisioning on demand between end systems.
A major capability we are developing in UltraLight is the ability to dynamically switch optical paths across the node, bypassing electronic equipment via a fiber cross-connect. The ability to switch dynamically provides additional functionality and also models the more abstract case where switching is done between colors (ITU-grid lambdas).

MonALISA to Manage Light Paths
- Dedicated modules to monitor and control optical switches, used to control the CALIENT switch at Caltech and the GLIMMERGLASS switch at CERN
- MonALISA agent system, used to create a global path; the algorithm can be extended to include prioritisation and pre-allocation
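To make the idea of an agent-computed global path concrete, here is an illustrative sketch only (it is not MonALISA code and the topology is invented): choose an end-to-end light path over a graph of optical cross-connects, with a policy hook where prioritisation or pre-allocation could later be plugged in.

```python
from heapq import heappush, heappop

# Illustrative sketch only (not MonALISA's agent API): pick an end-to-end
# light path across a graph of optical cross-connects. Edge weights stand in
# for a cost metric (hops, latency, or a priority/pre-allocation penalty
# supplied by a policy hook). The topology below is made up.
TOPOLOGY = {
    "CERN":      {"StarLight": 1.0},
    "StarLight": {"CERN": 1.0, "Caltech": 1.0, "UM": 1.0},
    "Caltech":   {"StarLight": 1.0, "SLAC": 1.0},
    "UM":        {"StarLight": 1.0},
    "SLAC":      {"Caltech": 1.0},
}

def find_light_path(src, dst, topology=TOPOLOGY, penalty=lambda a, b: 0.0):
    """Dijkstra over the cross-connect graph; `penalty` is a policy hook."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in topology[node].items():
            if nbr not in seen:
                heappush(queue, (cost + w + penalty(node, nbr), nbr, path + [nbr]))
    return None

if __name__ == "__main__":
    print(find_light_path("CERN", "SLAC"))   # (3.0, ['CERN', 'StarLight', 'Caltech', 'SLAC'])
```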

Monitoring for UltraLight
Network monitoring is essential for UltraLight. We need to understand our network infrastructure and track its performance, both historically and in real time, to enable the network as a managed, robust component of our overall infrastructure.
There are two ongoing efforts we are leveraging to help provide the monitoring capability required:
- IEPM
- MonALISA
Both efforts have already made significant progress within UltraLight. We are working on the level of detail to track, as well as determining the most effective user interface and presentation.
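As a flavor of the kind of quantity such monitoring tracks (a generic sketch, not tied to IEPM or MonALISA internals), link utilization can be derived from successive interface byte counters:

```python
# Generic sketch (not IEPM/MonALISA code): derive link utilization from two
# successive readings of an interface byte counter, as a monitoring system
# might record them.
def link_utilization(bytes_t0: int, bytes_t1: int, interval_s: float,
                     link_capacity_bps: float) -> float:
    """Return fractional utilization of a link over the polling interval."""
    bits_sent = (bytes_t1 - bytes_t0) * 8
    return bits_sent / (interval_s * link_capacity_bps)

if __name__ == "__main__":
    # Example: 60 GB moved in 60 s on a 10 Gbps wave -> 0.8 (80% utilization).
    print(link_utilization(0, 60 * 10**9, 60.0, 10e9))
```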

End-System Performance
Latest disk-to-disk result over a 10 Gbps WAN: 4.3 Gbits/sec (536 MB/sec), using 8 TCP streams from CERN to Caltech; windows, 1 TB file.
Hardware: quad AMD Opteron processors with 3 AMD-8131 chipsets (64-bit/133 MHz PCI-X slots), 3 Supermicro Marvell SATA disk controllers and 24 SATA 7200 rpm disks.
- Local disk I/O: 9.6 Gbits/sec (1.2 GBytes/sec read/write, with <20% CPU utilization)
- 10 GE NIC: 7.5 Gbits/sec (memory-to-memory, with 52% CPU utilization)
- 2 x 10 GE NICs (802.3ad link aggregation): 11.1 Gbits/sec (memory-to-memory)
- Need PCI-Express, TCP offload engines
A 4U server with 24 disks (9 TB) and a 10 GbE NIC, capable of 700 MBytes/sec in the LAN and ~500 MBytes/sec in a WAN, is $25k today. A small server with a few disks (1.2 TB), capable of 120 MBytes/sec (matching a GbE port), is $4k.
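The unit conversions behind these figures are easy to misread, so here is a small worked check, added for convenience and using only the numbers quoted above:

```python
# Worked check of the figures quoted above (decimal units, as in the slide).
wan_gbps = 4.3
print(wan_gbps * 1e9 / 8 / 1e6)   # ~537 MB/sec, consistent with the quoted 536 MB/sec

# At that rate, a 1 TB "terabyte-scale transaction" takes about half an hour:
tb = 1e12
print(tb / 536e6 / 60)            # ~31 minutes

# Reaching the 1 GByte/sec target would cut this to ~17 minutes.
print(tb / 1e9 / 60)
```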

UltraLight/ATLAS Data Transfer Test
UltraLight is interested in disk-to-disk systems capable of utilizing 10 Gbit networks. We plan to begin testing by July. The starting goal is to match the ~500 MBytes/sec already achieved; the target is to reach 1 GByte/sec by the end of the year.

UltraLight Global Services
- Global Services support management and co-scheduling of multiple resource types, and provide strategic recovery mechanisms from system failures
- Scheduling decisions are based on CPU, I/O and network capability and on end-to-end task performance estimates, including loading effects
- Decisions are constrained by local and global policies
- Implementation: auto-discovering, multithreaded services, with service engines to schedule threads, making the system scalable and robust
- Global Services consist of:
  - Network and system resource monitoring, to provide pervasive end-to-end resource monitoring information to the high-level services
  - Policy-based job planning services, balancing policy, efficient resource use and acceptable turnaround time
  - Task execution services, with job-tracking user interfaces and incremental replanning in case of partial incompletion
- Exploit the SPHINX (UFl) framework for Grid-wide policy-based scheduling; extend its capability to the network domain
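As an illustration of what scheduling on CPU, I/O and network estimates under policy constraints might look like at its very simplest, here is a toy sketch; it is not SPHINX or the actual UltraLight services, and all site entries and numbers are invented:

```python
from dataclasses import dataclass

# Toy sketch only (not SPHINX or the UltraLight global services): rank
# candidate sites for a task by an estimated end-to-end completion time,
# after filtering out sites disallowed by policy.

@dataclass
class Site:
    name: str
    free_cpus: int
    disk_mb_per_s: float       # local I/O rate available to the task
    net_mb_per_s: float        # measured/estimated WAN rate to this site
    allowed: bool = True       # stand-in for local/global policy checks

def completion_estimate(site: Site, input_mb: float, cpu_seconds: float) -> float:
    """Very rough estimate: stage-in time plus (ideally parallel) compute time."""
    transfer = input_mb / min(site.net_mb_per_s, site.disk_mb_per_s)
    compute = cpu_seconds / max(site.free_cpus, 1)
    return transfer + compute

def choose_site(sites, input_mb, cpu_seconds):
    candidates = [s for s in sites if s.allowed]
    return min(candidates, key=lambda s: completion_estimate(s, input_mb, cpu_seconds))

if __name__ == "__main__":
    sites = [
        Site("Caltech", free_cpus=40,  disk_mb_per_s=700, net_mb_per_s=500),
        Site("UM",      free_cpus=120, disk_mb_per_s=300, net_mb_per_s=120),
        Site("CERN",    free_cpus=200, disk_mb_per_s=900, net_mb_per_s=60, allowed=False),
    ]
    best = choose_site(sites, input_mb=1_000_000, cpu_seconds=36_000)   # 1 TB input
    print(best.name)   # "Caltech" with these invented numbers
```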

UltraLight Application Services
Make UltraLight functionality available to the physics applications and their Grid production and analysis environments:
- Application frameworks are augmented to interact effectively with the Global Services (GS); the GS interact in turn with the storage access and local execution service layers
- Applications provide hints to the high-level services about their requirements
  - Interfaced also to managed network and storage services
  - Allows effective caching and pre-fetching, and opportunities for global and local optimization of throughput

A User's Perspective: GAE in the UltraLight Context
1. The client authenticates against the VO
2. Uses the lookup service to discover available Grid services
3 & 4. Contacts the data location and job planner/scheduler services
5-8. Generates a job plan
9. Starts plan execution by sending subtasks to the scheduled execution services
10. (a) The monitoring service sends updates back to the client application, while (b) the steering service is used by clients or agents to modify the plan when needed
11. Data is returned to the client through the data collection service
12. Iterate in the next analysis cycle
It is important to note that the system helps interpret and optimize itself while "summarizing" the details for ease of use.

UltraLight Educational Outreach
- Based at FIU, leveraging its CHEPREO and CIARA activities to provide students with opportunities in physics, astronomy and network research
- Alvarez and Kramer will help bridge and consolidate activities with GriPhyN, iVDGL and eVLBI
GOAL: To inspire and train the next generation of physicists, astronomers and network scientists.
PLAN:
1. Integrate students in the core research and application integration activities at participating universities
2. Use the UltraLight testbed for student-defined network projects
3. Opportunities for (minority) FIU students to participate
4. Student and teacher involvement through REU, CIARA, QuarkNet
5. Use UltraLight's international reach to allow US students to participate in research experiences at international labs and accelerators, from their home institutions
A three-day workshop has been organized for June 6-10 to launch these efforts.

UltraLight Near-Term Milestones
Protocols:
- Integration of FAST TCP (v.1) (July 2005)
- New MPLS and optical path-based provisioning (Aug 2005)
- Testing of new TCP implementations (Aug-Sep 2005)
Optical switching:
- Commission optical switches at the LA CENIC/NLR PoP and at CERN (May 2005)
- Develop dynamic server connections on a path for TB transactions (Sep 2005)
Storage and application services:
- Evaluate/optimize drivers and parameters for I/O filesystems (April 2005)
- Evaluate/optimize drivers and parameters for 10 GbE server NICs (June 2005)
- Select hardware for 10 GE NICs, buses and RAID controllers (June-Sep 2005)
- Break the 1 GByte/s barrier (Sep 2005)
Monitoring and simulation:
- Deployment of an end-to-end monitoring framework (Aug 2005)
- Integration of tools/models to build a simulator for the network fabric (Dec 2005)
Agents:
- Start development of agents for resource scheduling (June 2005)
- Match scheduling allocations to usage policies (Sep 2005)
WAN-in-Lab:
- Connect the Caltech WAN-in-Lab to the testbed (June 2005)
- Establish a procedure to move new protocol stacks into field trials (June 2005)

Summary: Network Progress
For many years the wide area network was the bottleneck; this is no longer the case in many countries, which makes deployment of a data-intensive Grid infrastructure possible.
- Recent Internet2 Land Speed Record (I2LSR) results show, for the first time, that the network can be truly transparent; throughputs are limited by the end hosts.
- The challenge has shifted from getting adequate bandwidth to deploying adequate infrastructure to make effective use of it.
Some transport protocol issues still need to be resolved; however, there are many encouraging signs that practical solutions may now be in sight.
The 1 GByte/sec disk-to-disk challenge: today, 1 TB at 536 MB/sec from CERN to Caltech.
- Still in the early stages; we expect substantial improvements.
Next-generation network and Grid system:
- Extend and augment existing Grid computing infrastructures (currently focused on CPU/storage) to include the network as an integral component.

Conclusions
UltraLight promises to deliver a critical missing component for future eScience: the integrated, managed network.
We have a strong team in place, as well as a plan, to provide the needed infrastructure and services for production use by LHC turn-on at the end of 2007.
We look forward to a busy, productive year working on UltraLight!
Questions?