The EU DataTAG Project
Olivier H. Martin, CERN - IT Division
Presented at the Grid Workshop, LISHEP conference, 7th February, Rio de Janeiro, Brazil
The project
Two main areas of focus:
- Grid-related network research
- Interoperability between European and US Grids
A 2.5 Gbps transatlantic lambda between CERN (Geneva) and StarLight (Chicago), dedicated to research (no production traffic).
Expected outcomes:
- Hide the complexity of wide area networking, i.e. the network becomes truly transparent.
- Better interoperability between Grid projects in Europe and North America: DataGrid, and possibly other EU-funded Grid projects; PPDG, GriPhyN, DTF, and iVDGL in the USA.
The project (cont.)
European partners: INFN (IT), PPARC (UK), University of Amsterdam (NL), and CERN as project coordinator. INRIA (FR) and ESA (European Space Agency) will join soon.
Significant contributions to the DataTAG workplan have been made by Jason Leigh (EVL, University of Illinois), Joel Mambretti (Northwestern University), and Brian Tierney (LBNL).
Strong collaborations are already in place with ANL, Caltech, FNAL, SLAC, and the University of Michigan, as well as Internet2 and ESnet.
Budget: 3.98 MEUR
Start date: January 1, 2002 - Duration: 2 years
Funded manpower: ~15 persons/year
NSF support through the existing collaborative agreement with CERN (Eurolink award)
DataTAG project
[Diagram: DataTAG project network topology - CERN connected via GEANT to national research networks (SuperJANET4 in the UK, GARR-B in Italy, SURFnet in the Netherlands) and, via New York, to STAR LIGHT/STAR TAP, Abilene, ESnet, and MREN in the US.]
DataTAG planned set up (second half 2002)
[Diagram: DataTAG test equipment at the CERN PoP (CIXP, Geneva) and at STARLIGHT (Qwest NBC PoP, Chicago), linked by the 2.5 Gb lambda; UvA, INFN, PPARC and other partners reach the circuit via GEANT, while ESNET, ABILENE and the US Grid projects (PPDG, iVDGL, DTF, GriPhyN) connect at STARLIGHT; DataGRID connects on the CERN side.]
Workplan (1)
WP1: Provisioning & Operations (P. Moroni/CERN)
Will be done in cooperation with DANTE.
Two major issues:
- Procurement
- Routing: how can the DataTAG partners have transparent access to the DataTAG circuit across GEANT and their national networks?
WP5: Information dissemination and exploitation (CERN)
WP6: Project management (CERN)
Workplan (2)
WP2: High Performance Networking (Robin Tasker/PPARC)
- High-performance transport: TCP/IP performance over large bandwidth*delay networks (see the sketch below)
- Alternative transport solutions
- End-to-end inter-domain QoS
- Advance network resource reservation
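A back-of-the-envelope illustration of why plain TCP/IP struggles over large bandwidth*delay paths: the sender must keep a full bandwidth-delay product (BDP) of data in flight, far beyond the classic 64 KB TCP window. A minimal sketch in Python; the link rates are the 2.5 Gbps DataTAG lambda and the 10 Gbps example used later in this talk, while the 120 ms transatlantic RTT is an assumed figure:

    # Why large bandwidth*delay products are hard for TCP: the congestion
    # window must cover the full bandwidth-delay product (BDP) to keep
    # the pipe full. 2.5 Gbps matches the DataTAG lambda; 10 Gbps matches
    # the example used later in the talk; the 120 ms RTT is an assumption.

    def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
        """Bandwidth-delay product: bytes that must be kept in flight."""
        return bandwidth_bps * rtt_s / 8

    for gbps in (2.5, 10.0):
        bdp = bdp_bytes(gbps * 1e9, 0.120)
        print(f"{gbps:>4} Gbps x 120 ms RTT -> BDP = {bdp / 1e6:.1f} MB "
              f"(vs. the classic 64 KB TCP window)")

This yields 37.5 MB for the 2.5 Gbps lambda and 150 MB at 10 Gbps, which is why large windows (and alternative transport solutions) are a WP2 research topic.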
Workplan (3)
WP3: Bulk Data Transfer & Application Performance Monitoring (Cees de Laat/UvA)
- Performance validation
- End-to-end user performance: validation, monitoring, optimization
- Application performance
- NetLogger (see the sketch below)
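As an illustration of NetLogger-style application performance monitoring: the application emits precisely timestamped events at each stage of a bulk transfer, so that end-to-end bottlenecks can be localized afterwards. The event names and field layout below are illustrative only, not NetLogger's actual format or API:

    # Illustrative NetLogger-style instrumentation (not NetLogger's API):
    # emit timestamped events around each stage of a transfer so that
    # post-hoc analysis can attribute delay to disk, network, or host.
    import socket
    import time

    def nl_event(name: str, **fields) -> None:
        kv = " ".join(f"{k}={v}" for k, v in fields.items())
        print(f"TS={time.time():.6f} HOST={socket.gethostname()} "
              f"EVNT={name} {kv}")

    nl_event("transfer.start", file="example.dat", size=10**9)
    # ... the read/send loop would emit per-block events here ...
    nl_event("transfer.end", file="example.dat")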
Workplan (4)
WP4: Interoperability between Grid Domains (Antonia Ghiselli/INFN)
- Grid resource discovery
- Access policies, authorization & security: identify major problems; develop inter-Grid mechanisms able to interoperate with domain-specific rules
- Interworking between domain-specific Grid services
- Test applications
- Interoperability, performance & scalability issues
Planning details
The lambda is expected to become available in the second half of 2002.
- Initially, test systems will either be located at CERN or connect via GEANT.
- GEANT is expected to provide VPNs (or equivalent) for DataGrid and/or access to the GEANT PoPs.
- Later, it is hoped that dedicated lambdas for DataGrid will be made available through GEANT or other initiatives (e.g. FLAG Telecom).
- Initially a 2.5 Gbps POS link; WDM later, depending on equipment availability.
The STARLIGHT
The next-generation STAR TAP, with the following main distinguishing features:
- Neutral location (Northwestern University)
- 1/10 Gigabit Ethernet based
- Multiple local-loop providers
- Optical switches for advanced experiments
STARLIGHT will provide a 2*622 Mbps ATM connection to the STAR TAP.
Started in July 2001.
Also hosting other advanced networking projects in Chicago and the State of Illinois.
N.B. Most European Internet Exchange Points have already been implemented along the same lines.
StarLight Infrastructure
Soon, StarLight will be an optical switching facility for wavelengths. (Slide credit: University of Illinois at Chicago)
Evolving StarLight Optical Network Connections
[Map: planned wavelength connections to StarLight in Chicago - Asia-Pacific, SURFnet and CERN, CA*net4 (Vancouver, Seattle), and US sites including Portland, San Francisco, U Wisconsin, NYC, PSC, IU, NCSA, Caltech, SDSC, Atlanta, AMPATH, and the 40 Gb DTF; local partners at Chicago: ANL, UIC, NU, UC, IIT, MREN.]
OMNInet Technology Trial @ StarLight (November 2001)
[Diagram: a four-site optical network in Chicago (West Taylor, Evanston, Lakeshore, S. Federal) built from OPTera Metro 5200 optical switching platforms, Passport 8600 switches, and application clusters, interconnected by GE and 10GE lambdas, with a 10GbE WAN link to CA*net4 planned for the future.]
A four-site network in Chicago: the first 10GE service trial! A test bed for all-optical switching, advanced high-speed services, and new applications, including high-performance streaming media and collaborative applications for health-care, financial, and commercial services.
Partners: SBC, Nortel, the International Center for Advanced Internet Research (iCAIR) at Northwestern, the Electronic Visualization Lab at the University of Illinois, Argonne National Lab, CANARIE.
Major Grid networking issues
- QoS (Quality of Service): still largely unresolved on a wide scale because of the complexity of deployment.
- TCP/IP performance over high-bandwidth, long-distance networks: the loss of a single packet will affect a 10 Gbps stream with a 200 ms RTT (round-trip time) for 5 hours, during which the average throughput will be 7.5 Gbps (see the worked calculation below).
- End-to-end performance in the presence of firewalls: there is a lack of products; can we rely on products becoming available, or should a new architecture be evolved?
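The 5-hour figure follows from standard TCP congestion avoidance: after a loss the congestion window is halved, then grows by one segment per round trip, so recovering the lost half of the window takes W/2 round trips. A minimal sketch of the arithmetic, assuming the 1500-byte MSS quoted later in the talk:

    # TCP congestion-avoidance recovery after a single packet loss:
    # the window is halved, then grows by one MSS per RTT, so it takes
    # W/2 round trips to return to the full window of W segments.
    rate_bps = 10e9     # 10 Gbps stream
    rtt_s = 0.200       # 200 ms round-trip time
    mss_bytes = 1500    # maximum segment size

    window_segments = rate_bps * rtt_s / (mss_bytes * 8)  # ~166,667
    recovery_s = (window_segments / 2) * rtt_s            # ~16,667 s
    print(f"recovery time ~ {recovery_s / 3600:.1f} hours")  # ~4.6, i.e. ~5 h
    # While the rate ramps linearly from 5 back to 10 Gbps, the average
    # throughput is (5 + 10) / 2 = 7.5 Gbps, as stated on the slide.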
Multiple Gigabit/second networking: facts, theory & practice
FACTS:
- Gigabit Ethernet (GbE) is nearly ubiquitous; 10GbE is coming very soon.
- 10 Gbps circuits have been available for some time already in wide area networks (WANs).
- 40 Gbps is in sight on WANs, but what comes after?
THEORY:
- A 1 GB file can be transferred in 11 seconds over a 1 Gbps circuit (*).
- A 1 TB file transfer would still require 3 hours, and a 1 PB file transfer would require 4 months.
PRACTICE:
- Rather disappointing results are obtained on high-bandwidth, large-RTT networks, and multiple streams have become the norm (see the sketch below).
(*) according to the 75% empirical rule
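A quick check of the THEORY figures above, applying the slide's 75% empirical rule (usable throughput taken as 75% of the line rate); a minimal sketch:

    # Transfer times over a 1 Gbps circuit, applying the slide's 75%
    # empirical rule (usable throughput ~ 75% of the line rate).
    effective_bps = 0.75 * 1e9

    for label, size_bytes in (("1 GB", 1e9), ("1 TB", 1e12), ("1 PB", 1e15)):
        seconds = size_bytes * 8 / effective_bps
        print(f"{label}: {seconds:,.0f} s "
              f"(~{seconds / 3600:.1f} h, ~{seconds / 86400:.0f} days)")
    # -> 1 GB ~ 11 s, 1 TB ~ 3 hours, 1 PB ~ 123 days (~4 months)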
Single stream vs multiple streams: the effect of a single packet loss (e.g. link error, buffer overflow)
[Chart: throughput (Gbps, y-axis 2.5 to 10) vs time, comparing a single stream against multiple parallel streams after one packet loss. The recovery time shown is T = 2.37 hours (RTT = 200 ms, MSS = 1500 B); average-throughput labels of 7.5, 6.25, 4.375, and 3.75 Gbps mark the different cases. A toy-model calculation follows below.]
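The chart's message can be reproduced with a toy model of TCP congestion avoidance: with n parallel streams sharing the path, a single loss halves only one stream, and that stream's smaller window also recovers n times faster. A minimal sketch; the even capacity split and the single-loss assumption are modelling choices of mine, not from the slide, so exact figures differ from the chart's:

    # Toy model: n parallel TCP streams evenly share a 10 Gbps path.
    # A single loss halves one stream's rate; that stream then regains
    # one MSS per RTT, so smaller per-stream windows recover faster.
    C_bps = 10e9
    rtt_s = 0.200
    mss_bits = 1500 * 8

    for n in (1, 2, 5, 10):
        per_stream = C_bps / n
        window = per_stream * rtt_s / mss_bits    # segments per stream
        recovery_s = (window / 2) * rtt_s         # time back to full rate
        avg_gbps = (C_bps - per_stream / 4) / 1e9  # mean rate while recovering
        print(f"{n:>2} stream(s): recovery {recovery_s / 3600:5.2f} h, "
              f"avg {avg_gbps:.2f} Gbps during recovery")

Under this model a single stream needs hours to recover while ten streams recover in minutes with barely any aggregate loss, which is why multiple parallel streams became the norm in practice.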