1
The DataTAG Project http://www.datatag.org/
Presentation at the University of Twente, The Netherlands, 17 September 2002
J.P. Martin-Flatin and Olivier H. Martin, CERN, Switzerland
2
Project Partners
- EU-funded partners: CERN (CH), INFN (IT), INRIA (FR), PPARC (UK) and University of Amsterdam (NL)
- U.S.-funded partners: Caltech, UIC, UMich, Northwestern University, StarLight
- Associate partners: SLAC, ANL, FNAL, Canarie, etc.
- Project coordinator: CERN, contact:
3
Budget of EU Side
- EUR 3.98M
- Funded manpower: 15 FTE/year; 21 FTE recruited
- Start date: January 1, 2002
- Duration: 2 years
4
Three Objectives
- Build a testbed to experiment with massive file transfers across the Atlantic
- High-performance protocols for gigabit networks underlying data-intensive Grids
- Interoperability between several major Grid projects in Europe and the USA
5
Grids
[Diagram: resource discovery across the DataTAG and iVDGL Grids. DataTAG side: GIIS edt004.cnaf.infn.it (mds-vo-name=datatag), a Resource Broker, gatekeepers edt004.cnaf.infn.it, Padova-site (LSF) and grid006f.cnaf.infn.it (Condor), computing elements (PBS, fork/PBS) with worker nodes edt001.cnaf.infn.it and edt002.cnaf.infn.it. iVDGL side: GIIS giis.ivdgl.org (mds-vo-name=ivdgl-glue), US-CMS and US-ATLAS gatekeepers, and fork/LSF job managers on hamachi.cs.uchicago.edu, dc-user.isi.edu and rod.mcs.anl.gov.]
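For illustration: each GIIS above is a Globus MDS index, i.e. an LDAP server, so resource discovery amounts to an LDAP query. A minimal sketch with python-ldap, assuming MDS 2.x conventions (port 2135, o=grid suffix) that the slides do not spell out:

```python
# Illustrative sketch: querying a Globus MDS GIIS (an LDAP server) for
# resource information. The host and mds-vo-name come from the diagram
# above; port 2135 and the search base are assumptions based on MDS 2.x
# defaults, not values given in the slides.
import ldap  # python-ldap

con = ldap.initialize("ldap://edt004.cnaf.infn.it:2135")
con.simple_bind_s()  # GIIS queries are typically anonymous

# Fetch every entry published under the DataTAG virtual organization.
entries = con.search_s("mds-vo-name=datatag,o=grid",
                       ldap.SCOPE_SUBTREE,
                       "(objectClass=*)")
for dn, attrs in entries:
    print(dn)
```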
6
Testbed
7
Objectives
- Provisioning of a 2.5 Gbit/s transatlantic circuit between CERN (Geneva) and StarLight (Chicago)
- Dedicated to research (no production traffic)
- Multi-vendor testbed with layer-2 and layer-3 capabilities: Cisco, Alcatel, Juniper
- Testbed open to other Grid projects
- Collaboration with GEANT
8
2.5 Gbit/s Transatlantic Circuit
- Operational since 20 August 2002 (T-Systems); delayed by the KPNQwest bankruptcy
- Routing plan developed for access across GEANT
- Circuit initially connected to Cisco 76xx routers (layer 3)
- High-end PC servers at CERN and StarLight: SysKonnect GbE NICs can saturate the circuit with TCP traffic
- Layer-2 equipment deployment under way
- Full testbed deployment scheduled for 31 October 2002
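For a single TCP stream to fill this circuit, the socket buffers must cover the bandwidth-delay product: 2.5 Gbit/s at ~120 ms RTT means roughly 37.5 MB in flight. A minimal sketch of that sizing (the actual tuning used on the DataTAG servers is not given in the slides):

```python
# Sizing TCP socket buffers to the bandwidth-delay product (BDP).
# 2.5 Gbit/s x 120 ms RTT = 300 Mbit ~= 37.5 MB that must be in flight.
import socket

CAPACITY_BPS = 2.5e9   # circuit capacity, bit/s (from the slides)
RTT_S = 0.120          # transatlantic round-trip time, seconds

bdp_bytes = int(CAPACITY_BPS * RTT_S / 8)   # ~37.5 MB

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Ask the kernel for buffers covering the BDP; the kernel may clamp the
# request to its configured maximum (e.g. net.core.wmem_max on Linux).
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp_bytes)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp_bytes)
print(f"requested {bdp_bytes / 1e6:.1f} MB socket buffers")
```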
9
Why Yet Another 2.5 Gbit/s Transatlantic Circuit?
- Most existing or planned 2.5 Gbit/s transatlantic circuits are for production and not suitable for advanced networking experiments
- Need operational flexibility: deploy new equipment (routers, GMPLS-capable multiplexers), activate new functionality (QoS, MPLS, distributed VLAN)
- The only known exception to date is the SURFnet circuit between Amsterdam and Chicago (StarLight)
10
Major R&D 2.5 Gbit/s circuits between Europe & USA
[Map: R&D connectivity between Europe and the USA. European networks SuperJANET4 (UK), GARR-B (IT), INRIA/VTHD/ATRIUM (FR), SURFnet (NL), CERN (CH) and GEANT connect via New York (3 x 2.5 Gbit/s) to Abilene, ESNET, Canarie, MREN and StarLight.]
11
Network Research
12
DataTAG Activities
- Enhance TCP performance: modify the Linux kernel
- Monitoring
- QoS: LBE (Scavenger)
- Bandwidth reservation: AAA-based bandwidth on demand; lightpath managed as a Grid resource
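As a concrete illustration of the LBE (Scavenger) item: scavenger traffic is conventionally marked with DSCP CS1 (decimal 8, as in the QBone Scavenger Service). Whether DataTAG marked flows per socket or at the routers is not stated in the slides; per-socket marking would look like this:

```python
# Marking a flow as Less-than-Best-Effort (Scavenger).
# The QBone Scavenger Service uses DSCP CS1 (binary 001000, decimal 8);
# the TOS byte carries the DSCP in its upper six bits, hence the shift.
import socket

SCAVENGER_DSCP = 8
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, SCAVENGER_DSCP << 2)
# Routers configured for LBE will now serve this flow only with the
# bandwidth left over by best-effort and higher traffic classes.
```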
13
TCP Performance Issues
- TCP's current congestion control algorithms (AIMD) are not suited to gigabit networks: long time to recover from packet loss
- Line errors are interpreted as congestion
- Delayed ACKs + large window size + large RTT = problem
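To see why recovery is slow: after a loss, TCP halves its congestion window and then adds only one MSS per round trip. A back-of-the-envelope formula, consistent with the responsiveness table below:

```latex
% Window needed to fill a link of capacity C at round-trip time RTT:
W = \frac{C \cdot \mathrm{RTT}}{\mathrm{MSS}}
\qquad\Longrightarrow\qquad
T_{\text{recover}} \approx \frac{W}{2}\,\mathrm{RTT}
                   = \frac{C \cdot \mathrm{RTT}^2}{2\,\mathrm{MSS}}
```

With C = 10 Gbit/s, RTT = 120 ms and MSS = 1460 bytes this is on the order of an hour and a half, in line with the responsiveness table two slides below.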
14
Single vs. Multiple Streams: Effect of a Single Packet Loss
[Plot: throughput (Gbit/s) over time for single and multiple TCP streams, with average throughputs of 7.5, 5 and 2.5 Gbit/s marked. After a single packet loss, recovery takes T =~ 45 min (RTT = 120 ms, MSS = 1500 bytes).]
15
Responsiveness

Capacity      RTT      # inc      Responsiveness
9.6 kbit/s    40 ms    1          40 ms
10 Mbit/s     20 ms    8          150 ms
622 Mbit/s    120 ms   ~2,900     ~6 min
2.5 Gbit/s    120 ms   ~11,600    ~23 min
10 Gbit/s     120 ms   ~46,200    ~1 h 30 min

inc size = MSS = 1,460 bytes
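As a cross-check, this sketch derives the table from the recovery formula above: the number of one-MSS window increments needed to climb back from half rate, at one RTT each. It lands within roughly 10% of the slide's figures, which presumably used slightly different rounding or frame overheads:

```python
# Reproducing the responsiveness column: increments of one MSS per RTT
# from half rate back to full rate, times one RTT per increment.
# Results are within ~10% of the slide's figures.
MSS_BITS = 1460 * 8

def responsiveness(capacity_bps, rtt_s):
    increments = max(1.0, capacity_bps * rtt_s / (2 * MSS_BITS))
    return increments, increments * rtt_s  # (# inc, seconds)

for label, cap, rtt in [("9.6 kbit/s", 9.6e3, 0.040),
                        ("10 Mbit/s", 10e6, 0.020),
                        ("622 Mbit/s", 622e6, 0.120),
                        ("2.5 Gbit/s", 2.5e9, 0.120),
                        ("10 Gbit/s", 10e9, 0.120)]:
    inc, t = responsiveness(cap, rtt)
    print(f"{label:>11}  RTT {rtt * 1e3:3.0f} ms  ~{inc:,.0f} inc  ~{t:,.1f} s")
```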
16
Research Directions
- New fairness principle
- Change multiplicative decrease: do not divide by two
- Change additive increase: binary search; local and global stability (Caltech technical report CALT)
- Estimation of the available capacity and bandwidth*delay product: on the fly or cached
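The slides leave the algorithm unspecified (the Caltech report number is truncated above), but the effect of "do not divide by two" can be shown with a generic AIMD(a, b) sketch. This illustrates the principle only; it is not the actual DataTAG/Caltech proposal:

```python
# Generic AIMD(a, b) sketch: cwnd grows by `a` MSS per RTT and is cut by a
# fraction `b` on loss. Standard TCP is AIMD(1, 0.5); a gentler decrease
# (e.g. b = 0.125) shortens recovery on high bandwidth-delay-product paths.

def rtts_to_recover(full_window, a=1.0, b=0.5):
    """RTTs needed to climb from (1 - b) * full_window back to full_window."""
    cwnd = (1.0 - b) * full_window
    rtts = 0
    while cwnd < full_window:
        cwnd += a
        rtts += 1
    return rtts

W = 25_000  # ~2.5 Gbit/s x 120 ms / 1460 B, in segments
print("AIMD(1, 0.5):  ", rtts_to_recover(W), "RTTs")           # ~12,500
print("AIMD(1, 0.125):", rtts_to_recover(W, b=0.125), "RTTs")  # ~3,125
```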
17
Grid Interoperability
18
Objectives
- Interoperability between European and US Grids
- Middleware integration and coexistence
- GLUE (Grid Lab Uniform Environment): integration & standardization, testbed and demo
- Enable a set of applications to run on the transatlantic testbed:
  - CERN LHC experiments: ATLAS, CMS, ALICE
  - other experiments: CDF, D0, BaBar, Virgo, LIGO, etc.
19
GLUE Relationships
[Diagram: HEP applications and other experiments integrate with the GriPhyN, PPDG and iVDGL projects; the HICB/HIJTB drives interoperability and standardization through GLUE.]
20
Interoperability Framework
[Diagram: Grid resources (CE, SE, NE) and core services must be common to all Grids; optimization services must coexist.]
21
Grid Software Architecture
[Diagram: a scheduler and services for grid request submission, replica management, resource discovery and security policy, connected through access and information protocols to compute, storage, network and catalog services, which in turn manage computers, storage systems and network devices.]
22
Status of GLUE Activities
- Resource discovery and GLUE schema: computing element, storage element, network element
- Authentication across organizations
- Minimal authorization
- Unified service discovery
- Common software deployment procedures
23
Resource Discovery and GLUE Schema

Computing resources representation:

Structure          Description
computing element  Entry point into the queuing system
cluster            Container that groups sub-clusters or nodes
sub-cluster        Homogeneous collection of nodes
host               Physical computing node
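To make the hierarchy concrete, here is a hypothetical description of a computing element in that structure. The attribute names are modeled loosely on the GLUE 1.x schema and the values reuse testbed hostnames from earlier slides; both are illustrative assumptions, not published schema entries:

```python
# Illustrative GLUE-style description of a computing element, following
# the structure table above: CE -> cluster -> sub-cluster -> hosts.
computing_element = {
    "GlueCEUniqueID": "edt004.cnaf.infn.it:2119/jobmanager-pbs",  # queue entry point
    "GlueCEInfoLRMSType": "pbs",          # underlying queuing system
    "cluster": {                          # container for sub-clusters/nodes
        "GlueClusterUniqueID": "edt-cluster",
        "subclusters": [{                 # homogeneous collection of nodes
            "GlueSubClusterUniqueID": "edt-subcluster-1",
            "hosts": ["edt001.cnaf.infn.it",   # physical computing nodes
                      "edt002.cnaf.infn.it"],
        }],
    },
}

# A resource broker can then match requests against such descriptions,
# e.g. select CEs whose local resource manager is PBS:
if computing_element["GlueCEInfoLRMSType"] == "pbs":
    print("candidate CE:", computing_element["GlueCEUniqueID"])
```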
24
Future GLUE Activities
- Data movement: GridFTP, replica location service
- Advanced authorization: cross-organization, community-based authorization
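For the replica-location item, the core idea is a catalog mapping each logical file name to its physical copies; a toy sketch of the concept (not the actual Globus/EDG replica location service API, and the file names and second host are invented):

```python
# Toy replica location service: map a logical file name (LFN) to the
# physical file names (PFNs) of its copies at different sites.
replica_catalog = {
    "lfn://datatag/cms/run42/events.dat": [
        "gsiftp://edt004.cnaf.infn.it/data/run42/events.dat",
        "gsiftp://gridnode.example.org/data/run42/events.dat",  # invented host
    ],
}

def locate(lfn):
    """Return all known physical replicas of a logical file."""
    return replica_catalog.get(lfn, [])

for pfn in locate("lfn://datatag/cms/run42/events.dat"):
    print(pfn)  # a data mover would then fetch one copy via GridFTP
```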
25
Demos
- iGrid 2002: US16 with the University of Michigan, US14 with Caltech and ANL, CA03 with Canarie
- IST 2002
- SC 2002
26
Summary
- Gigabit testbed for data-intensive Grids: layer 3 in place, layer 2 being provisioned
- Modified version of TCP to improve performance
- Grid interoperability:
  - GLUE schema for resource discovery
  - working on common authorization solutions
  - evaluation of software deployment tools
  - first interoperability tests on heterogeneous transatlantic testbeds