Presented at the GGF3 conference, 8th October 2001, Frascati, Italy

The EU DataTAG Project
Olivier H. Martin
CERN - IT Division

The EU DataTAG project
Two main focus areas:
- Grid-applied network research
- Interoperability between Grids
2.5 Gbps transatlantic lambda between CERN (Geneva) and StarLight (Chicago), dedicated to research (no production traffic).
Expected outcomes:
- Hide the complexity of wide-area networking
- Better interoperability between Grid projects in Europe and North America:
  - DataGrid, and possibly other EU-funded Grid projects
  - PPDG, GriPhyN, DTF, iVDGL (USA)

The EU DataTAG project (cont.)
European partners: INFN (IT), PPARC (UK), University of Amsterdam (NL), and CERN as project coordinator.
- Significant contributions to the DataTAG workplan have been made by Jason Leigh (EVL@University of Illinois), Joel Mambretti (Northwestern University), and Brian Tierney (LBNL).
- Strong collaborations are already in place with ANL, Caltech, FNAL, SLAC, and the University of Michigan, as well as with Internet2 and ESnet.
The budget is 3.9 MEUR.
Expected starting date: 1 December 2001.
NSF support through the existing collaborative agreement with CERN (Euro-Link award).

DataTAG project
[Network map: CERN (Geneva) linked via GEANT to the national research networks SuperJANET4 (UK), GARR-B (IT), and SURFnet (NL), and across the Atlantic through New York to STAR LIGHT / STAR TAP in Chicago, which connect to ESnet, Abilene, and MREN in the USA.]

DataTAG planned set-up (second half of 2002)
[Diagram: a 2.5 Gb circuit between the CERN PoP (CIXP, Geneva) and STARLIGHT (Chicago), with DataTAG test equipment at both ends and at the partner sites (UvA, INFN, PPARC, ...). The European side connects to GEANT and DataGRID; the US side connects to ESnet, Abilene, and the PPDG, iVDGL, GriPhyN, and DTF projects.]

DataTAG Workplan
WP1: Provisioning & Operations (CERN)
- Will be done in cooperation with DANTE.
- Two major issues:
  - Procurement
  - Routing: how can the DataTAG partners have transparent access to the DataTAG circuit across GEANT and their national networks?
WP5: Information dissemination and exploitation (CERN)
WP6: Project management (CERN)

DataTAG Workplan (cont.)
WP2: High-Performance Networking (PPARC)
- High-performance transport:
  - TCP/IP performance over large bandwidth*delay networks
  - Alternative transport solutions
- End-to-end inter-domain QoS
- Advance network resource reservation

DataTAG Workplan (cont.)
WP3: Bulk Data Transfer & Application Performance Monitoring (UvA)
- Performance validation
- End-to-end user performance:
  - Validation
  - Monitoring
  - Optimization
- Application performance
- NetLogger

DataTAG Workplan (cont.)
WP4: Interoperability between Grid Domains (INFN)
- Grid resource discovery
- Access policies, authorization & security:
  - Identify major problems
  - Develop inter-Grid mechanisms able to interoperate with domain-specific rules
- Interworking between domain-specific Grid services
- Test applications
- Interoperability, performance & scalability issues

DataTAG Planning details
- Lambda availability is expected in the second half of 2002.
- Initially, test systems will either be located at CERN or connect via GEANT:
  - GEANT is expected to provide VPNs (or equivalent) for DataGrid and/or access to the GEANT PoPs.
  - Later, it is hoped that GEANT will provide dedicated lambdas for DataGrid.
- Initially a 2.5 Gb/s POS link; WDM later, depending on equipment availability.

- At the STAR TAP, but not yet on the map: KREONet2 (Korea).
- Newcomers expected in the next few months:
  - ANSP (Brazil - Sao Paulo R&E network)
  - RNP (Brazil - country-wide R&E network)
  - HEANET (Ireland R&E network)
- STAR LIGHT, a project to run lambdas between participants to a meet point in Chicago, is also underway.

The STAR LIGHT
Next-generation STAR TAP with the following main distinguishing features:
- Neutral location (Northwestern University)
- 1/10 Gigabit Ethernet based
- Multiple local-loop providers
- Optical switches for advanced experiments
The STAR LIGHT will provide a 2*622 Mbps ATM connection to the STAR TAP.
Started in July 2001.
Also hosting other advanced networking projects in Chicago & the State of Illinois.
N.B. Most European Internet Exchange Points have already been implemented along the same lines.

StarLight Infrastructure
Soon, StarLight will be an optical switching facility for wavelengths.

Evolving StarLight Optical Network Connections
[Map: planned wavelength connections from StarLight in Chicago* to Seattle, Portland, San Francisco, NYC, PSC, and Atlanta; to CA*net4 (Vancouver), SURFnet, CERN, the Asia-Pacific, and AMPATH; and over the 40 Gb DTF network to U Wisconsin, IU, NCSA, SDSC, and Caltech. *StarLight partners: ANL, UIC, NU, UC, IIT, MREN.]

Multiple Gigabit/second networking: Facts, Theory & Practice (1)
Facts:
- Gigabit Ethernet (GbE) is nearly ubiquitous; 10GbE is coming very soon.
- 10 Gbps circuits have been available for some time already in wide-area networks (WANs).
- 40 Gbps is in sight on WANs, but what comes after?
Theory:
- A 1 GB file can be transferred in 11 seconds over a 1 Gbps circuit (*).
- A 1 TB file transfer would still require 3 hours, and a 1 PB file transfer would require 4 months.
(*) according to the 75% empirical rule
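The theory figures above follow from simple arithmetic; a minimal sketch, where the 75% efficiency factor is the slide's own empirical rule and the helper function name is ours:

```python
# Back-of-envelope transfer times, applying the slide's empirical 75% rule
# (only ~75% of the nominal line rate is achieved in practice).

def transfer_time(file_bytes: float, link_bps: float, efficiency: float = 0.75) -> float:
    """Seconds needed to move file_bytes over a link_bps circuit."""
    return file_bytes * 8 / (link_bps * efficiency)

GB, TB, PB = 1e9, 1e12, 1e15

print(f"1 GB over 1 Gbps: {transfer_time(GB, 1e9):.0f} s")          # ~11 s
print(f"1 TB over 1 Gbps: {transfer_time(TB, 1e9) / 3600:.1f} h")   # ~3 h
print(f"1 PB over 1 Gbps: {transfer_time(PB, 1e9) / 86400:.0f} d")  # ~4 months
```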

Multiple Gigabit/second networking: Facts, Theory & Practice (2)
- Assuming a suitable window size is used (i.e. bandwidth*RTT), the achieved throughput also depends on the packet size and the packet loss rate.
  - This means that with non-zero packet loss rates, higher throughput will be achieved using Gigabit Ethernet "jumbo frames".
  - This could conflict with strong security requirements in the presence of firewalls (e.g. throughput, and transparency of the TCP/IP window-scaling option).
- Single stream vs multiple streams:
  - Tuning the number of streams is probably as difficult as tuning a single stream.
  - However, as explained later, multiple streams are a very effective way to bypass the deficiencies of TCP/IP.
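The combined effect of packet size and loss rate can be illustrated with the well-known Mathis et al. approximation for steady-state TCP throughput; the slide only states the qualitative dependence, so the model constant and the loss rate below are illustrative assumptions:

```python
import math

# Mathis et al. model: throughput ≈ (MSS / RTT) * (C / sqrt(p)), with C ≈ 1.22.
# Shows why jumbo frames (larger MSS) raise throughput once losses are non-zero.

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float,
                          c: float = 1.22) -> float:
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

rtt, p = 0.2, 1e-7  # 200 ms transatlantic-class RTT, one loss per 10^7 packets
for mss in (1460, 8960):  # standard Ethernet vs jumbo-frame MSS (illustrative)
    print(f"MSS {mss:4d} B -> {mathis_throughput_bps(mss, rtt, p) / 1e9:.2f} Gbps")
```

Under this model, throughput scales linearly with MSS, so a ~9000-byte jumbo frame buys roughly a 6x improvement over a 1460-byte MSS at the same loss rate.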

Single stream vs Multiple streams (1)
Why do multiple streams normally yield higher aggregate throughput than a single stream in the presence of packet losses?
- Assume an RTT of 200 ms (e.g. CERN-Caltech) and a 10 Gbps link.
- The window size is computed according to the following formula: Window Size = Bandwidth * RTT (i.e. 250 MB at 10 Gbps and 200 ms RTT).
- With no packet losses, one 10 Gbps stream and two 5 Gbps streams are equivalent, even though the CPU load on the end systems may not be the same.
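The window-size formula can be checked directly (hypothetical helper; the result matches the slide's 250 MB figure):

```python
# Bandwidth-delay product: the TCP window needed to keep the pipe full.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    return bandwidth_bps * rtt_s / 8  # divide by 8: bits -> bytes

print(f"{bdp_bytes(10e9, 0.2) / 1e6:.0f} MB")  # 250 MB at 10 Gbps and 200 ms RTT
```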

Single stream vs Multiple streams (2)
- With one packet loss, the 10 Gbps stream will halve its window (dropping to 5 Gbps) and then increase it by one MSS (1500 bytes) per RTT; the average rate during the congestion-avoidance phase will therefore be 7.5 Gbps at best.
- With one packet loss and two 5 Gbps streams, only one stream is affected, and its congestion-avoidance phase is shorter (almost half), because RTTs are hardly affected by the available bandwidth. The affected stream averages 3.75 Gbps, so the aggregate throughput will be 8.75 Gbps.
- In addition, the full 10 Gbps regime will be reached faster.
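The recovery time T quoted on the following charts can be reproduced from the additive-increase rule (hypothetical helper; with a 1460-byte MSS payload it gives roughly the slide's T = 2.37 hours for a 5 Gbps stream):

```python
# Time for additive increase (one MSS per RTT) to win back the halved window.

def aimd_recovery_s(rate_bps: float, rtt_s: float, mss_bytes: int = 1460) -> float:
    window = rate_bps * rtt_s / 8         # pre-loss window in bytes (bandwidth * RTT)
    deficit = window / 2                  # multiplicative decrease halves the window
    return (deficit / mss_bytes) * rtt_s  # one MSS regained per RTT

print(f"5 Gbps stream:  {aimd_recovery_s(5e9, 0.2) / 3600:.2f} h")   # ~2.38 h
print(f"10 Gbps stream: {aimd_recovery_s(10e9, 0.2) / 3600:.2f} h")  # ~4.76 h
```

The halved stream must regain tens of megabytes of window 1460 bytes at a time, one RTT per step, which is why recovery on a transatlantic path takes hours rather than seconds.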

Single stream vs Multiple streams (3): effect of a single packet loss (e.g. link error, buffer overflow)
[Chart: throughput (Gbps) vs time. A single 10 Gbps stream drops to 5 Gbps and climbs back, averaging 7.5 Gbps over the recovery; with two 5 Gbps streams, the affected stream averages 4.375 Gbps, for an aggregate of 9.375 Gbps. Recovery time T = 2.37 hours (RTT = 200 ms, MSS = 1500 B).]

Single stream vs Multiple streams (4): effect of two packet losses (e.g. link error, buffer overflow)
[Chart: throughput (Gbps) vs time. A single 10 Gbps stream suffering two losses averages 6.25 Gbps. With two 5 Gbps streams: one loss on each stream averages 4.583 Gbps per stream (9.166 Gbps aggregate); two losses on the same stream yield an aggregate average of 8.75 Gbps. Recovery time T = 2.37 hours (RTT = 200 ms, MSS = 1500 B).]

Multiple Gigabit/second networking (tentative conclusions)
- Are TCP's congestion-avoidance algorithms compatible with high-speed, long-distance networks?
  - The "cut the transmit rate in half on a single packet loss, then increase the rate additively (1 MSS per RTT)" algorithm, known as AIMD (additive increase, multiplicative decrease), may simply not work.
- New TCP/IP adaptations may be needed to better cope with LFNs ("long fat networks"), e.g. TCP Vegas, but simpler changes can also be considered:
  - non-TCP/IP-based transport solutions, use of Forward Error Correction (FEC), or Explicit Congestion Notification (ECN) rather than active queue-management techniques (RED/WRED)?
- We should work closely with the Web100 & Net100 projects.
  - Web100 (http://www.web100.org/), a 3 MUSD NSF project, might help enormously:
    - better TCP/IP instrumentation (MIB)
    - self-tuning tools for measuring performance
    - improved FTP implementation
  - Net100 (http://www.net100.org/) is a complementary, DoE-funded project:
    - development of network-aware operating systems