The DataTAG Project
CEOS workshop on Grid computing, 6 May 2002, Frascati (Rome)
Mirco Mazzucato, INFN-Padova (slides mainly from Olivier Martin)

Presentation outline
- CERN networking
- Grid networking requirements
- The DataTAG project: partners, goals, positioning
- Grid networking issues
- Concluding remarks

Long-term Data Grid networking requirements
- A basic assumption of data-intensive Grids is that the underlying network is more or less invisible; a prerequisite, therefore, is very fast links between Grid nodes.
- Is the hierarchical structure of European academic R&E networks and the pan-European interconnection backbone GEANT a sustainable long-term model for adequately supporting data-intensive Grids such as the LHC (Large Hadron Collider) Grid?
- Are lambda Grids feasible and affordable?
- It is interesting to note that the original LHC computing model, which was itself hierarchical (Tier0, Tier1, etc.), appears to be evolving towards a somewhat more flexible model.

Evolution of LHC bandwidth requirements
- 622 Mbps between CERN and some (or all) LHC regional centres by 2005. "There seems to be no other way to reach the LHC target than to significantly increase the budget for external networking by a factor of 3 to 5, depending on when the bandwidth should be delivered." (LHC Bandwidth Requirements, 2001)
- 2.5 Gbps between CERN and some (or all) LHC regional centres by 2005. "In any case, a great deal of optimism is needed in order to reach the LHC target!" (LHC Bandwidth Requirements, 2002)
- 10 Gbps between CERN and some (or all) LHC regional centres by 2006. It is very likely that the first long-haul 10 Gbps circuits will appear at CERN as early as 2003/2004.
- Evolution of circuit costs

What happened? As a result of the EU-wide deregulation of the telecom market that took place in 1998, there is an extraordinary situation today where circuit prices have fallen well below the most optimistic forecasts. One open issue: will this trend continue? Another: what is the most efficient use of the network, especially for transatlantic connections?

The DataTAG Project http://www.datatag.org

Funding agencies Cooperating Networks

EU partners

Associated US partners

The project
- European partners: INFN (IT), PPARC (UK), University of Amsterdam (NL) and CERN as project coordinator. INRIA (FR) will join in June/July 2002. ESA/ESRIN (IT) will provide Earth Observation demos together with NASA.
- Budget: 3.98 MEUR
- Start date: 1 January 2002
- Duration: 2 years (aligned with DataGrid)
- Funded manpower: ~15 persons/year

US funding & collaborations
- US NSF support through the existing collaborative agreement with CERN (Eurolink award).
- US DoE support through the CERN-USA line consortium.
- Significant contributions to the DataTAG workplan have been made by Andy Adamson (University of Michigan), Jason Leigh (EVL, University of Illinois), Joel Mambretti (Northwestern University) and Brian Tierney (LBNL).
- Strong collaborations are already in place with ANL, Caltech, FNAL, SLAC and the University of Michigan, as well as Internet2 and ESnet.

In a nutshell
- Two main areas of focus: Grid-related network research (WP2, WP3) and interoperability between European and US Grids (WP4).
- A 2.5 Gbps transatlantic lambda between CERN (Geneva) and StarLight (Chicago) around July 2002 (WP1), dedicated to research (no production traffic).
- A fairly unique multi-vendor testbed with layer 2 and layer 3 capabilities.
- In principle open to other EU Grid projects, as well as to ESA for demonstrations.

[Diagram: multi-vendor testbed with layer 3 as well as layer 2 capabilities, spanning CERN (Geneva), STARLIGHT (Chicago) and INFN (Bologna), interconnected via GEANT, Abilene and ESnet. Link speeds shown: 2.5 Gbps (CERN-StarLight), 1.25 Gbps, 622 Mbps and Gigabit Ethernet at StarLight. Equipment: Juniper and Cisco (6509) routers, Alcatel switches, and layer 2 multiplexers (M = layer 2 mux).]

Goals
- End-to-end Gigabit Ethernet performance using innovative high-performance transport protocols.
- Assess and experiment with inter-domain QoS and bandwidth reservation techniques (see the sketch below).
- Interoperability between some major Grid projects in Europe and North America: DataGrid as the reference, possibly other EU-funded Grid projects, and PPDG, GriPhyN, TeraGrid, iVDGL (USA).
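
One concrete, low-level ingredient of the inter-domain QoS experiments listed above is marking traffic with a DiffServ code point so that routers along the path can give it preferential treatment. The following minimal sketch (Python, with a placeholder destination address; nothing here is a DataTAG deliverable) shows how a sender would set the EF code point on its packets; whether the mark is honoured across GEANT and the transatlantic circuit is exactly what such experiments had to establish.

```python
import socket

# DSCP EF (Expedited Forwarding) is 46 = 0b101110; the DSCP occupies the
# upper six bits of the legacy IP TOS byte, so the TOS value is 46 << 2 = 0xB8.
DSCP_EF_TOS = 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF_TOS)

# Placeholder destination: whether the marking survives end to end depends
# entirely on the policy of every administrative domain along the path.
sock.sendto(b"qos-probe", ("192.0.2.10", 9000))
```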

[Map: major 2.5 Gbps circuits between Europe & USA, including the DataTAG circuit. European side: GEANT, SuperJANET4 (UK), GARR-B (IT), SURFnet (NL) and CERN; US side: Abilene, ESnet, MREN, STAR-TAP and STAR-LIGHT, with landing points in New York and Chicago.]

Project positioning
- Why yet another 2.5 Gbps transatlantic circuit? Most existing or planned 2.5 Gbps transatlantic circuits carry production traffic, which makes them basically unsuitable for advanced networking experiments that require a great deal of operational flexibility in order to investigate new application-driven network services, e.g. deploying new equipment (routers, G-MPLS capable multiplexers) or activating new functionality (QoS, MPLS, distributed VLANs).
- The only known exception to date is the SURFnet circuit between Amsterdam and Chicago (StarLight).
- Concerns: how far beyond StarLight can DataTAG extend? How fast will the US research network infrastructure match that of Europe?

Major Grid networking issues
- QoS (Quality of Service): still largely unresolved on a wide scale because of the complexity of deployment.
- TCP/IP performance over high-bandwidth, long-distance networks: the loss of a single packet will affect a 10 Gbps stream with a 200 ms RTT (round-trip time) for 5 hours, during which the average throughput will be 7.5 Gbps. On the 2.5 Gbps DataTAG circuit with a 100 ms RTT, this translates into a 38-minute recovery time, during which the average throughput will be 1.875 Gbps.
- Line error rates: a 2.5 Gbps circuit carries about 0.2 million packets per second. A bit error rate of 10^-9 means one packet loss every 250 ms; a bit error rate of 10^-11 means one packet loss every 25 seconds. (A back-of-the-envelope sketch of these calculations follows below.)
- End-to-end performance in the presence of firewalls: there is a lack of high-performance firewalls; can we rely on products becoming available, or should a new architecture be evolved?
- Evolution of LAN infrastructure to 1 Gbps and then 10 Gbps.
- Uniform end-to-end performance.
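
The recovery-time and loss-interval figures above can be reproduced, to within a small factor, with the standard back-of-the-envelope AIMD model: after a loss the congestion window halves and then grows by one segment per RTT. The sketch below assumes 1500-byte segments; the slide's own numbers were presumably derived with slightly different assumptions, so the outputs agree only in order of magnitude.

```python
def tcp_recovery(link_bps: float, rtt_s: float, mss_bytes: int = 1500):
    """Time for TCP to grow its window back to the bandwidth-delay product
    after a single loss, plus the average throughput while recovering
    (the rate ramps linearly from 1/2 to full, i.e. 3/4 on average)."""
    window_pkts = link_bps * rtt_s / 8 / mss_bytes   # BDP in segments
    recovery_s = (window_pkts / 2) * rtt_s           # +1 segment per RTT
    return recovery_s, 0.75 * link_bps

def loss_interval_s(link_bps: float, ber: float) -> float:
    """Mean time between bit errors, i.e. roughly between lost packets."""
    return 1.0 / (link_bps * ber)

if __name__ == "__main__":
    for rate, rtt in [(10e9, 0.200), (2.5e9, 0.100)]:
        t, avg = tcp_recovery(rate, rtt)
        print(f"{rate/1e9:.1f} Gbps, RTT {rtt*1e3:.0f} ms: "
              f"recovery ~{t/60:.0f} min, avg ~{avg/1e9:.2f} Gbps")
    for ber in (1e-9, 1e-11):
        print(f"BER {ber:.0e} at 2.5 Gbps: one loss every "
              f"~{loss_interval_s(2.5e9, ber):.2f} s")
```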

Concluding remarks
- The dream of abundant bandwidth has now become a hopefully lasting reality!
- Major transport protocol issues still need to be resolved.
- Large-scale deployment of bandwidth-greedy applications has yet to be done; the proof of concept has yet to be made.

Workplan (1)
WP1: Provisioning & Operations (P. Moroni, CERN)
- Will be done in cooperation with DANTE and the National Research & Education Networks (NRENs).
- Two main issues: procurement (largely done already as far as the circuit is concerned; equipment still to be decided) and routing (how can the DataTAG partners access the DataTAG circuit across GEANT and their national networks?).
- Funded participants: CERN (1 FTE), INFN (0.5 FTE)
WP5: Information dissemination and exploitation (CERN)
- Funded participants: CERN (0.5 FTE)
WP6: Project management (CERN)
- Funded participants: CERN (2 FTE)

Workplan (2)
WP2: High Performance Networking (Robin Tasker, PPARC)
- High-performance transport: TCP/IP performance over large bandwidth*delay networks.
- Alternative transport solutions using a modified TCP/IP stack, or UDP-based transport conceptually similar to rate-based TCP (see the illustrative pacing sketch below).
- End-to-end inter-domain QoS.
- Advance network resource reservation.
- Funded participants: PPARC (2 FTE), INFN (2 FTE), UvA (1 FTE), CERN (1 FTE)
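
To illustrate what a "UDP-based transport conceptually similar to rate-based TCP" looks like at its simplest, the sketch below paces UDP datagrams at a fixed target rate instead of letting loss-driven window adjustment set the sending rate. It is only an illustration under assumed parameters (rate, packet size, placeholder receiver), not any specific DataTAG protocol.

```python
import socket
import time

def paced_udp_send(dest, rate_bps: float, payload_size: int = 1400,
                   duration_s: float = 1.0) -> int:
    """Send UDP datagrams at a fixed target rate by spacing sends evenly."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = payload_size * 8 / rate_bps        # seconds between datagrams
    payload = bytes(payload_size)
    sent = 0
    next_send = time.perf_counter()
    deadline = next_send + duration_s
    while time.perf_counter() < deadline:
        sock.sendto(payload, dest)
        sent += 1
        next_send += interval
        delay = next_send - time.perf_counter()
        if delay > 0:
            time.sleep(delay)
    return sent

if __name__ == "__main__":
    # ~100 Mbit/s for one second towards a placeholder receiver.
    print(paced_udp_send(("192.0.2.10", 9000), 100e6), "datagrams sent")
```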

Workplan (3)
WP3: Bulk Data Transfer & Application Performance Monitoring (Cees de Laat, UvA)
- Performance validation: end-to-end user performance (validation, monitoring, optimization).
- Application performance: NetLogger (an instrumentation sketch follows below).
- Funded participants: UvA (2 FTE), CERN (0.6 FTE)
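
As a flavour of the application-level instrumentation WP3 relied on (the slide names NetLogger), the sketch below wraps a transfer in timestamped start/end events written as key=value lines. The field names and format are illustrative only, not NetLogger's actual schema or API.

```python
import socket
import sys
import time
from datetime import datetime, timezone

def log_event(event: str, **fields) -> None:
    """Emit one timestamped key=value event line (illustrative format)."""
    stamp = datetime.now(timezone.utc).isoformat()
    extra = " ".join(f"{k}={v}" for k, v in fields.items())
    sys.stdout.write(f"DATE={stamp} HOST={socket.gethostname()} "
                     f"EVENT={event} {extra}\n")

def timed_transfer(data: bytes) -> None:
    """Bracket a (dummy) bulk transfer with start/end events so that the
    elapsed time and achieved throughput can be correlated afterwards."""
    log_event("transfer.start", bytes=len(data))
    t0 = time.perf_counter()
    # ... the actual send/receive over the network would go here ...
    elapsed = time.perf_counter() - t0
    log_event("transfer.end", bytes=len(data), seconds=f"{elapsed:.6f}")

timed_transfer(b"x" * 10_000_000)
```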

WP4 Workplan (Antonia Ghiselli & Cristina Vistoli, INFN)
Main subject: interoperability between EU and US Grid services from DataGrid, GriPhyN and PPDG, in collaboration with iVDGL, for the HEP applications.
Objectives:
- Produce an assessment of interoperability solutions.
- Provide a test environment for the LHC applications to extend existing use cases to test the interoperability of the Grid components.
- Provide input to a common Grid LHC solution.
- Support EU-US integrated Grid deployment.
Funded participants: INFN (6 FTE), PPARC (1 FTE), UvA (1 FTE)

WP4 Tasks
Assuming the same basic Grid services (GRAM, GSI, GRIS) across the different Grid projects, the main issues are:
- 4.1 Resource discovery, coordinator C. Vistoli
- 4.2 Authorization / VO management, coordinator R. Cecchini
- 4.3 Interoperability of collective services between EU-US Grid domains, coordinator F. Donno
- 4.4 Test applications, contact people from each application: ATLAS / L. Perini, CMS / C. Grandi, ALICE / P. Cerello

WP4.1 - Resource Discovery
Objectives:
- Enable an interoperable system that allows for the discovery of, and access to, the Grid services available at participant sites of all Grid domains, in particular between the EU and US Grids.
- Ensure compatibility of the resource discovery system with the existing components/services of the available Grid systems.
- A subproject on the information schema has been established (an illustrative discovery query follows below).
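
In the Globus MDS of that period, per-site resource information was published through GRIS/GIIS LDAP servers, so a resource discovery client was essentially an LDAP client. The sketch below assumes the third-party ldap3 Python library and the conventional MDS port and base DN (2135 and "Mds-Vo-name=local, o=grid"); the host name is a placeholder, and a real client would filter on object classes from the agreed common information schema.

```python
from ldap3 import ALL, Connection, Server

GIIS_HOST = "giis.example.org"            # placeholder host
MDS_PORT = 2135                           # conventional MDS port (assumption)
MDS_BASE = "Mds-Vo-name=local, o=grid"    # conventional MDS base DN (assumption)

# MDS servers of that era typically allowed anonymous binds.
server = Server(GIIS_HOST, port=MDS_PORT, get_info=ALL)
conn = Connection(server, auto_bind=True)

# List everything published under the local VO; a real discovery client
# would search for specific object classes from the common schema instead.
conn.search(MDS_BASE, "(objectClass=*)", attributes=["*"])
for entry in conn.entries:
    print(entry.entry_dn)
```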

WP4.2 - Objectives
- Identify Authentication, Authorization and Accounting (AAA) mechanisms allowing interoperability between Grids.
- Ensure compatibility of the AAA mechanisms with the existing components/services of the available Grid systems.
- An authorization / VO management subproject has been established (a minimal grid-mapfile sketch follows below).
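
The baseline authorization mechanism that both the EU and US testbeds shared at the time was the Globus grid-mapfile, a per-site file mapping certificate subject DNs to local accounts; part of WP4.2's problem was moving from files like this to VO-level management. A minimal parsing sketch, with placeholder path and DN (the real grid-mapfile syntax has more corner cases than are handled here):

```python
import shlex

def load_gridmap(path: str) -> dict:
    """Parse a grid-mapfile: each non-comment line holds a quoted certificate
    subject DN followed by the local account(s) it is mapped to."""
    mapping = {}
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            tokens = shlex.split(line)        # shlex handles the quoted DN
            mapping[tokens[0]] = tokens[1:]
    return mapping

def is_authorized(mapping: dict, subject_dn: str) -> bool:
    """A subject is authorized if its DN appears in the map."""
    return subject_dn in mapping

# Example usage with placeholder path and DN:
# gridmap = load_gridmap("/etc/grid-security/grid-mapfile")
# print(is_authorized(gridmap, "/C=IT/O=INFN/CN=Some User"))
```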

WP4.3 / WP4.4 - Objectives
- Identify the Grid elements in the EU and US Grid projects.
- Identify common components in the testbeds used by the HEP experiments for semi-production activities in the EU and US, and classify them in an architectural framework.
- Plan and set up an environment with common EU-US services.
- Test common solutions in an EU-US domain in collaboration with iVDGL.

[Diagram: DataTAG/WP4 framework and relationships. Grid projects (DataGrid, PPDG, GriPhyN, LCG, Globus, Condor, ...) provide input to and receive feedback from the Grid interoperability activities (DataTAG/WP4, iVDGL, HICB/HIJTB, GGF), which produce integration and standardization proposals for the applications (LHC experiments, CDF, BaBar, ESA).]

Summary: the interoperability issues (1)
1) Certificates: solved.
2) GSI security: OK, but users are asking for improved error reporting for a production infrastructure.
3) Authorization and VO management: a joint subproject has started to work on a common solution.
4) Information schema: work is progressing well, and the group should be able to propose a common solution.
5) GIIS structure and hierarchies: a subproject is in the pipeline. The general issue of the differing visions of the information system, based on LDAP or on R-GMA, is still open.

Summary: the interoperability issues (2)
6) Scheduling, use of JDL and the experiment interface: regular meetings are taking place between EDG WP1 and the Condor/PPDG people, and are expected to produce a common recommendation.
7) Data management: a good collaboration between the EDG WP2 and Globus/PPDG teams; they should be able to make a common recommendation on the usage of GDMP, the Replica Catalog and the Replica Manager.
8) Packaging: LCG should have common release, packaging and installation tools. The LCG application effort to define a common LHC solution has started.