The DataTAG Project
Olivier H. Martin, CERN - IT Division
Presented at the International Workshop on Grid and Distributed Computing, 20 April 2002, Bucharest
Presentation outline
- CERN networking
  - Internet connectivity
  - CERN Internet Exchange Point (CIXP)
  - Grid networking requirements
  - Evolution of transatlantic circuit costs
- DataTAG project
  - Partners
  - Goals
  - Positioning
  - Grid networking issues
- Concluding remarks
- Appendix A: Detailed workplan
Main Internet connections at CERN
[Diagram: CERN's external connectivity - mission-oriented links to the World Health Organization (WHO) and IN2P3, the Swiss national research network SWITCH, general-purpose A&R and commodity Internet connections to Europe via GEANT (1.25/2.5 Gbps) and to the USA via USLIC (622 Mbps), the 2.5 Gbps DataTAG research circuit, and commercial peerings, meeting at the CERN CIXP; other labelled capacities include 2.5 Gbps, 155 Mbps, 45 Mbps and 1 Gbps.]
CERN's Distributed Internet Exchange Point (CIXP)
- Telecom operators & dark fibre providers: Cablecom, COLT, France Telecom, Global Crossing, GTS/EBONE, KPNQwest, LDCom (*), Deutsche Telekom/Multilink, MCI/Worldcom, SIG, Sunrise/diAx, Swisscom (Switzerland), Swisscom (France), SWITCH (**), Thermelec, VTX/Smartphone.
- Internet Service Providers include: 3GMobile (*), Infonet, AT&T Global Network Services (formerly IBM), Cablecom, Callahan, Carrier1, COLT, DFI, Deckpoint, Deutsche Telekom, diAx (dplanet), Easynet, Ebone/GTS, Eunet/KPNQwest, France Telecom/OpenTransit, Global-One, InterNeXt, IS Internet Services (ISION), IS-Productions, Nexlink, Net Work Communications (NWC), PSI Networks (IProlink), MCI/Worldcom, Petrel, Renater, Sita/Equant (*), Sunrise, Swisscom IP-Plus, SWITCH, GEANT, VTX, UUnet.
[Diagram: ISPs and telecom operators interconnect at the CIXP, which sits outside the CERN firewall and the CERN internal network.]
Long-term Data Grid networking requirements
- A basic assumption of data-intensive Grids is that the underlying network is more or less invisible. A prerequisite, therefore, is very fast links between Grid nodes.
- Is the hierarchical structure of European academic R&E networks, interconnected by the pan-European backbone GEANT, a sustainable long-term model for adequately supporting data-intensive Grids such as the LHC (Large Hadron Collider) Grid?
- Are lambda Grids feasible and affordable?
- It is interesting to note that the original LHC computing model, itself hierarchical (Tier0, Tier1, etc.), appears to be evolving towards a somewhat more flexible model.
Evolution of LHC bandwidth requirements
- LHC bandwidth requirements (2001): 622 Mbps between CERN and some (or all) LHC regional centres by 2005. "There seems to be no other way to reach the LHC target than to significantly increase the budget for external networking by a factor of 3 to 5, depending on when the bandwidth should be delivered."
- LHC bandwidth requirements (2002): 2.5 Gbps between CERN and some (or all) LHC regional centres by 2005. "In any case, a great deal of optimism is needed in order to reach the LHC target!"
- Looking ahead: 10 Gbps between CERN and some (or all) LHC regional centres by 2006. It is very likely that the first long-haul 10 Gbps circuits will already appear at CERN in 2003/2004.
- What happened? The evolution of circuit costs (next slide).
What happened?
- As a result of the EU-wide telecom deregulation that took place in 1998, there is an extraordinary situation today where circuit prices have fallen far below the most optimistic forecasts!
- As a consequence, however, many telecom operators are in serious difficulty, and it is very hard to predict how prices will evolve.
- Technology is still improving fast: 3 to 10 Tbps per fibre, with more colours/lambdas per fibre and faster lambdas.
- Installation costs are increasing rather than decreasing.
- Will the unavoidable consolidation result in stable, increasing or decreasing prices? Only time will tell!
Evolution of transatlantic circuit costs
Since 1995, we have been tracking the prices of transatlantic circuits in order to assess the budget needed to meet the LHC bandwidth targets. The following scenarios have been considered (see the sketch below):
- conservative (-20% per year)
- very plausible (-29% per year, i.e. prices halved every two years)
- Moore's law (-37% per year, i.e. prices halved every 18 months)
- optimistic (-50% per year)
N.B. Unlike raw circuits, where a price factor of 2 to 2.5 for 4 times the capacity is usually the norm, commodity Internet pricing is essentially linear (e.g. 150 CHF/Mbps).
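To make the scenarios concrete, here is a minimal sketch (our own illustration, not from the original analysis) projecting the relative cost of a fixed-capacity circuit under each annual decline rate; the base price of 1.0 and the 5-year horizon are arbitrary assumptions.

```python
# Minimal sketch: relative circuit cost under the four price-decline scenarios.
# The base price (1.0) and the 5-year horizon are illustrative assumptions.

def projected_cost(base_price: float, annual_decline: float, years: float) -> float:
    """Price after `years`, assuming a constant relative decline per year."""
    return base_price * (1.0 - annual_decline) ** years

scenarios = {
    "conservative":   0.20,  # -20% per year
    "very plausible": 0.29,  # 0.71**2 ~= 0.50, i.e. halved every two years
    "Moore's law":    0.37,  # 0.63**1.5 ~= 0.50, i.e. halved every 18 months
    "optimistic":     0.50,  # -50% per year
}

for name, decline in scenarios.items():
    # Relative price of the same capacity after 5 years (starting price = 1.0)
    print(f"{name:>14}: {projected_cost(1.0, decline, 5.0):.2f}")
```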
The DataTAG Project
Funding agencies & cooperating networks
EU partners
Associated US partners
The project
- European partners: INFN (IT), PPARC (UK), University of Amsterdam (NL) and CERN, as project coordinator. INRIA (FR) will join in June/July 2002. ESA/ESRIN (IT) will provide Earth Observation demos together with NASA.
- Budget: 3.98 MEUR
- Start date: 1 January 2002
- Duration: 2 years (aligned with DataGrid)
- Funded manpower: ~15 persons/year
US funding & collaborations
- US NSF support through the existing collaborative agreement with CERN (Eurolink award).
- US DoE support through the CERN-USA line consortium.
- Significant contributions to the DataTAG workplan have been made by Andy Adamson (University of Michigan), Jason Leigh (University of Illinois), Joel Mambretti (Northwestern University) and Brian Tierney (LBNL).
- Strong collaborations are already in place with ANL, Caltech, FNAL, SLAC and the University of Michigan, as well as Internet2 and ESnet.
In a nutshell
Two main areas of focus:
- Grid-related network research (WP2, WP3)
- Interoperability between European and US Grids (WP4)
A 2.5 Gbps transatlantic lambda between CERN (Geneva) and StarLight (Chicago) around July 2002 (WP1):
- dedicated to research (no production traffic);
- a fairly unique multi-vendor testbed with layer 2 and layer 3 capabilities;
- in principle open to other EU Grid projects, as well as ESA, for demonstrations.
Multi-vendor testbed with layer 3 as well as layer 2 capabilities
[Diagram: testbed topology between CERN (Geneva) and StarLight (Chicago) over the 2.5 Gbps lambda, built from Juniper and Cisco (6509) routers, Alcatel equipment and layer 2 multiplexers (M = layer 2 mux), with Gigabit Ethernet at StarLight, a 1.25 Gbps path towards INFN (Bologna) via GEANT, a 622 Mbps link, and connections to Abilene and ESnet.]
Goals
- End-to-end Gigabit Ethernet performance using innovative high-performance transport protocols.
- Assess and experiment with inter-domain QoS and bandwidth reservation techniques (see the sketch below).
- Interoperability between major Grid projects in Europe and North America:
  - DataGrid, possibly other EU-funded Grid projects;
  - PPDG, GriPhyN, TeraGrid, iVDGL (USA).
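As a flavour of the QoS work, here is a minimal sketch (our illustration, not DataTAG tooling) of DiffServ-style marking: the application sets the DSCP bits on its socket so that routers along the path can give the flow, for example, Expedited Forwarding treatment. The endpoint name and port are hypothetical.

```python
# Illustrative sketch: mark a flow with the EF DSCP so that DiffServ-aware
# routers can prioritize it. The endpoint name and port are hypothetical.
import socket

EF_DSCP = 46  # Expedited Forwarding per-hop behaviour code point

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# The DSCP occupies the top 6 bits of the (former) IP TOS byte.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
sock.connect(("testbed-peer.example.org", 5001))  # hypothetical test endpoint
sock.sendall(b"EF-marked test traffic")
sock.close()
```

Whether such markings survive across domain boundaries is precisely the inter-domain question the project set out to investigate.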
Major 2.5 Gbps circuits between Europe & USA
[Map: major 2.5 Gbps transatlantic circuits - the DataTAG circuit from CERN to StarLight, and circuits linking SuperJANET4 (UK), GARR-B (IT), SURFnet (NL) and GEANT to New York, Abilene, ESnet, MREN and STAR TAP.]
Project positioning
Why yet another 2.5 Gbps transatlantic circuit?
- Most existing or planned 2.5 Gbps transatlantic circuits carry production traffic, which makes them basically unsuitable for advanced networking experiments that require a great deal of operational flexibility to investigate new application-driven network services, e.g.:
  - deploying new equipment (routers, G-MPLS capable multiplexers);
  - activating new functionality (QoS, MPLS, distributed VLAN).
- The only known exception to date is the SURFnet circuit between Amsterdam and Chicago (StarLight).
- Concerns:
  - How far beyond StarLight can DataTAG extend?
  - How fast will the US research network infrastructure match that of Europe?
The StarLight
Next-generation STAR TAP with the following main distinguishing features:
- Neutral location (Northwestern University)
- 1/10 Gigabit Ethernet based
- Multiple local loop providers
- Optical switches for advanced experiments
StarLight will provide a 2*622 Mbps ATM connection to the STAR TAP.
- Started in July 2001
- Also hosting other advanced networking projects in Chicago and the State of Illinois
N.B. Most European Internet Exchange Points have already been deployed along the same principles.
Major Grid networking issues
- QoS (Quality of Service): still largely unresolved on a wide scale because of the complexity of deployment.
- TCP/IP performance over high-bandwidth, long-distance networks: the loss of a single packet will affect a 10 Gbps stream with a 200 ms RTT (round-trip time) for 5 hours, during which the average throughput will be 7.5 Gbps. On the 2.5 Gbps DataTAG circuit with a 100 ms RTT, this translates into a 38-minute recovery time, during which the average throughput will be 1.875 Gbps. (See the sketch below.)
- Line error rates: a 2.5 Gbps circuit carries about 0.2 million packets/second, so a bit error rate of 10^-9 means one packet loss every 250 ms, while a bit error rate of 10^-11 means one packet loss every 25 seconds.
- End-to-end performance in the presence of firewalls: there is a lack of high-performance firewalls; can we rely on products becoming available, or should a new architecture be evolved?
- Evolution of LAN infrastructure to 1 Gbps, then 10 Gbps.
- Uniform end-to-end performance.
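The recovery figures above follow from standard TCP (Reno-style) congestion control: after a single loss the congestion window is halved and then grows by one segment per round trip. A back-of-envelope sketch of our own (the slides' figures round somewhat differently, presumably under slightly different assumptions):

```python
# Back-of-envelope estimate of TCP's recovery after one lost packet: the
# window drops from W to W/2 and regains one MSS per RTT, so the full rate
# returns after (W/2) round trips, averaging 3/4 of link rate meanwhile.

def recovery_after_single_loss(bw_bps: float, rtt_s: float, mss_bytes: int = 1500):
    window_pkts = bw_bps * rtt_s / (mss_bytes * 8)  # full window, in segments
    recovery_s = (window_pkts / 2) * rtt_s          # W/2 round trips
    avg_bps = 0.75 * bw_bps                         # linear climb from W/2 back to W
    return recovery_s, avg_bps

t, avg = recovery_after_single_loss(10e9, 0.200)    # 10 Gbps, 200 ms RTT
print(f"~{t / 3600:.1f} hours to recover, averaging {avg / 1e9:.1f} Gbps")
```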
Single stream vs multiple streams: effect of a single packet loss (e.g. link error, buffer overflow)
[Chart: throughput (Gbps) vs time after a single packet loss, for a single stream and for multiple parallel streams; the single-stream case averages 7.5 Gbps over a recovery period of T = 2.37 hours! (RTT = 200 ms, MSS = 1500 B), while multiple streams lose a smaller fraction of the aggregate throughput. A sketch of the multi-stream arithmetic follows.]
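The chart's intuition can be made concrete with some arithmetic of our own (an illustrative sketch, not taken from the slide): with N equal streams sharing a link, a single loss halves only one stream, so the aggregate briefly dips to 1 - 1/(2N) of capacity and averages 1 - 1/(4N) while that stream recovers; the recovery is also roughly N times shorter, since each stream's window is W/N.

```python
# Rough arithmetic behind preferring multiple TCP streams on long fat
# networks: one loss halves a single stream, so with N streams the link
# still averages (1 - 1/(4N)) of its rate while that stream recovers.

def avg_aggregate_fraction(n_streams: int) -> float:
    return 1.0 - 1.0 / (4 * n_streams)

for n in (1, 2, 5, 10):
    # On a 10 Gbps link: 7.50, 8.75, 9.50, 9.75 Gbps respectively
    print(f"{n:>2} streams: average {10 * avg_aggregate_fraction(n):.2f} Gbps")
```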
Concluding remarks
- The dream of abundant bandwidth has now become a (hopefully lasting) reality!
- Major transport protocol issues still need to be resolved.
- Large-scale deployment of bandwidth-greedy applications remains to be done; the proof of concept has yet to be made.
Workplan (1)
WP1: Provisioning & Operations (P. Moroni/CERN)
- Will be done in cooperation with DANTE and the National Research & Education Networks (NRENs).
- Two main issues:
  - Procurement (largely done already as far as the circuit is concerned; equipment still to be decided).
  - Routing: how can the DataTAG partners access the DataTAG circuit across GEANT and their national networks?
- Funded participants: CERN (1 FTE), INFN (0.5 FTE)
WP5: Information dissemination and exploitation (CERN)
- Funded participants: CERN (0.5 FTE)
WP6: Project management (CERN)
- Funded participants: CERN (2 FTE)
Workplan (2)
WP2: High Performance Networking (Robin Tasker/PPARC)
- High-performance transport:
  - TCP/IP performance over large bandwidth*delay networks;
  - alternative transport solutions using a modified TCP/IP stack, or a UDP-based transport conceptually similar to rate-based TCP (see the sketch below).
- End-to-end inter-domain QoS.
- Advance network resource reservation.
- Funded participants: PPARC (2 FTE), INFN (2 FTE), UvA (1 FTE), CERN (1 FTE)
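To illustrate the rate-based idea (a hypothetical sketch of ours, not a WP2 deliverable): instead of TCP's loss-driven window, datagrams are paced at a fixed target rate; a real protocol would add sequencing, acknowledgements and rate feedback on top.

```python
# Hypothetical sketch of a rate-based UDP sender: datagrams are paced at a
# fixed target rate instead of being governed by TCP's loss-driven window.
import socket
import time

def paced_udp_send(data: bytes, addr: tuple, rate_bps: float, chunk: int = 1400) -> None:
    """Send `data` to `addr` in `chunk`-byte datagrams at roughly `rate_bps`."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = chunk * 8 / rate_bps  # seconds between datagrams at the target rate
    next_send = time.monotonic()
    for offset in range(0, len(data), chunk):
        sock.sendto(data[offset:offset + chunk], addr)
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)        # reliability and feedback deliberately omitted
    sock.close()

# Hypothetical usage: 10 MB at 100 Mbps to a test receiver.
# paced_udp_send(b"\0" * 10_000_000, ("receiver.example.org", 9000), 100e6)
```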
Workplan (3)
WP3: Bulk Data Transfer & Application Performance Monitoring (Cees de Laat/UvA)
- Performance validation
- End-to-end user performance:
  - validation
  - monitoring
  - optimization
- Application performance (NetLogger); a toy end-to-end measurement sketch follows.
- Funded participants: UvA (2 FTE), CERN (0.6 FTE)
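A toy sketch (ours, not NetLogger or a WP3 deliverable) of the kind of end-to-end measurement involved: time a bulk TCP transfer against a sink and report the achieved application-level throughput. Host and port are hypothetical (port 9 is the classic discard service).

```python
# Toy end-to-end probe: push bytes over TCP to a discard-style sink and
# report the achieved application-level throughput.
import socket
import time

def measure_throughput_mbps(host: str, port: int, nbytes: int = 100 * 1024 * 1024) -> float:
    """Send `nbytes` of zeros to (host, port) and return the achieved Mbps."""
    payload = b"\0" * 65536
    sent = 0
    with socket.create_connection((host, port)) as sock:
        start = time.monotonic()
        while sent < nbytes:
            sock.sendall(payload)
            sent += len(payload)
        elapsed = time.monotonic() - start
    return sent * 8 / elapsed / 1e6

# e.g. print(measure_throughput_mbps("sink.example.org", 9))  # hypothetical sink
```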
WP4 Workplan (Antonia Ghiselli & Cristina Vistoli/INFN)
Main subject: interoperability between EU and US Grid services from DataGrid, GriPhyN and PPDG, in collaboration with iVDGL, for the HEP applications.
Objectives:
- Produce an assessment of interoperability solutions.
- Provide a test environment for the LHC applications, extending existing use cases to test the interoperability of the Grid components.
- Provide input to a common LHC Grid architecture.
- Plan an integrated EU-US Grid deployment.
Funded participants: INFN (6 FTE), PPARC (1 FTE), UvA (1 FTE)
WP4 Tasks
Assuming the same basic Grid services (GRAM, GSI, GRIS) across the different Grid projects, the main issues are:
- 4.1 Resource discovery; coordinator: C. Vistoli
- 4.2 Security/authorization; coordinator: R. Cecchini
- 4.3 Interoperability of collective services between EU-US Grid domains; coordinator: F. Donno
- 4.4 Test applications; contact people from each application: Atlas / L. Perini, CMS / C. Grandi, Alice / P. Cerello
DataTAG/WP4 framework and relationships
[Diagram: Grid projects (DataGrid, PPDG, GriPhyN, LCG, Globus, Condor, ...) exchange input and feedback with Grid interoperability activities (DataTAG/WP4, iVDGL, HICB/HIJTB, GGF - integration, standardization, proposals), which in turn serve the applications (LHC experiments, CDF, BaBar, ESA).]
WP4.1 - Resource Discovery
Objectives:
- Enable an interoperable system that allows for the discovery of, and access to, the Grid services available at the participant sites of all Grid domains, in particular between the EU and US Grids (a hedged sketch of an MDS/GRIS query follows).
- Ensure compatibility of the resource discovery system with the existing components/services of the available Grid systems.
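Since Globus MDS publishes resource information over LDAP, a resource discovery client is essentially an LDAP client. A hedged sketch of ours: the host name is hypothetical, while port 2135 and the base DN shown are, to our knowledge, the customary MDS defaults.

```python
# Hedged sketch: query a GRIS (Globus MDS information server) over LDAP.
# The host is hypothetical; port 2135 and the base DN are customary MDS defaults.
from ldap3 import Server, Connection, SUBTREE

server = Server("gris.example.org", port=2135)
conn = Connection(server, auto_bind=True)   # GRIS typically allows anonymous reads
conn.search(
    search_base="mds-vo-name=local, o=grid",
    search_filter="(objectclass=*)",        # list everything the GRIS publishes
    search_scope=SUBTREE,
    attributes=["*"],
)
for entry in conn.entries:
    print(entry.entry_dn)
conn.unbind()
```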
Task 4.1 Time Plan
- Reference agreement document on the resource discovery schema: by 31 May 2002
- "INTERGRID VO" MDS test: by 31 July 2002
- Evaluation of the interoperability of multiple resource discovery systems (FTree, MDS, etc.): by 30 September 2002
- Network Element: by 31 December 2002
- Impact of the new Web Services technology: by 30 June 2003
- Identify missing components; final deployment: by 31 December 2003
WP4.2 - Objectives
- Identify Authentication, Authorization and Accounting (AAA) mechanisms allowing interoperability between Grids (a grid-mapfile sketch follows).
- Ensure compatibility of the AAA mechanisms with the existing components/services of the available Grid systems.
Task 4.2 Time Plan
Issues:
- Minimum requirements for DataTAG CAs;
- Analysis of available authorization tools and policy languages and their suitability (in cooperation with the DataGrid Authorization WG);
- Mapping of the policies of the VO domains;
- Information exchange protocol between the authorization systems;
- Feasibility study of an accounting system (in cooperation with DataGrid WP1).
Reference document:
- First draft: 31 July 2002
- Final version: 30 April 2003
Deployment:
- First: 30 September 2002
- Final: 31 December 2003
WP4.3 / WP4.4 - Objectives
- Identify the Grid elements in the EU and US Grid projects.
- Identify common components in the testbeds used by the HEP experiments for semi-production activities in the EU and US, and classify them in an architectural framework.
- Plan and set up an "InterGrid VO" environment with common EU-US services.
- Deploy an EU-US VO domain in collaboration with iVDGL.
Task 4.3/4.4 Time Plan
The time plan will follow the schedule of each experiment:
- Study of experiment layouts and their classification (first results): by 30 June 2002
- First deployment (already started): by 30 September 2002
- First report on integration and interoperability issues: by 30 December 2002
- First working version of an EU-US VO domain: by 30 June 2003
- Complete deployment: by 31 December 2003