DataTAG Project Update
CGW’2003 workshop, Cracow (Poland), 28 October 2003
Olivier Martin, CERN, Switzerland

DataTAG partners

Funding agencies & Cooperating Networks

DataTAG Mission
• EU ↔ US Grid network research
  – High-performance transport protocols
  – Inter-domain QoS
  – Advance bandwidth reservation
• EU ↔ US Grid interoperability
• Sister project to EU DataGrid
TransAtlantic Grid

Main DataTAG achievements (EU-US Grid interoperability)
• GLUE interoperability effort with DataGrid, iVDGL & Globus
• GLUE testbed & demos
• VOMS design and implementation in collaboration with DataGrid
• VOMS evaluation within iVDGL underway
• Integration of GLUE-compliant components in DataGrid and VDT middleware

Main DataTAG achievements (advanced networking)
• Internet land speed records have been beaten one after the other by DataTAG project members and/or teams closely associated with DataTAG:
  – ATLAS Canada lightpath experiment (iGrid 2002)
  – New Internet2 Land Speed Record (I2 LSR) by a NIKHEF/Caltech team (SC2002)
  – Scalable TCP, HSTCP, GridDT & FAST experiments (DataTAG partners & Caltech)
  – Intel 10GigE tests between CERN (Geneva) and SLAC (Sunnyvale) (Caltech, CERN, Los Alamos NL, SLAC)
  – New I2LSR (Feb 27-28, 2003): 2.38 Gb/s sustained rate, single TCP/IPv4 flow, 1 TB in one hour, Caltech-CERN
• The latest IPv4 & IPv6 I2LSRs were awarded live from Indianapolis during Telecom World 2003:
  – May 6, 2003: 987 Mb/s, single TCP/IPv6 stream
  – Oct 1, 2003: 5.44 Gb/s sustained rate, single TCP/IPv4 stream, 1.1 TB in 26 minutes, i.e. one 680 MB CD per second
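As a quick sanity check, the arithmetic behind the last record can be reproduced directly. A minimal sketch in Python (the record figures come from the slide above; decimal units, 1 TB = 10^12 bytes, are our assumption):

    # Back-of-the-envelope check of the Oct 1, 2003 I2LSR figures.
    data_bits = 1.1e12 * 8             # 1.1 TB transferred, decimal units assumed
    duration_s = 26 * 60               # 26 minutes
    rate_bps = data_bits / duration_s
    print(f"average rate: {rate_bps / 1e9:.2f} Gb/s")   # ~5.6 Gb/s, consistent with
                                                        # the quoted 5.44 Gb/s sustained
    cds_per_second = rate_bps / 8 / 680e6               # one CD = 680 MB
    print(f"equivalent to {cds_per_second:.1f} CDs per second")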

Significance of I2LSR to the Grid?
• Essential to establish the feasibility of multi-Gigabit/second single-stream IPv4 & IPv6 data transfers:
  – Over dedicated testbeds in a first phase
  – Then across academic & research backbones
  – Last but not least, across campus networks
• Disk to disk rather than memory to memory
  – Study impact of high-performance TCP over disk servers
• Next steps (see the window-sizing sketch below):
  – Above 6 Gb/s expected soon between CERN and Los Angeles (Caltech/CENIC PoP) across DataTAG & Abilene
  – Goal is to reach 10 Gb/s with new PCI Express buses
• Study alternatives to standard TCP:
  – Non-TCP transport
  – HSTCP, FAST, Grid-DT, etc.
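Much of the difficulty comes down to the bandwidth-delay product: a single TCP stream can only fill a long-haul pipe if its window, and hence the socket buffers at both ends, hold a full round trip's worth of data. A minimal sketch (the RTT values are illustrative assumptions, not DataTAG measurements):

    # Bandwidth-delay product: minimum TCP window / socket buffer needed
    # to keep a long-haul path full. RTTs below are illustrative assumptions.
    def bdp_megabytes(rate_gbps: float, rtt_ms: float) -> float:
        return rate_gbps * 1e9 * (rtt_ms / 1000) / 8 / 1e6

    for rate_gbps, rtt_ms, path in [
        (2.5, 120, "2.5 Gb/s transatlantic, ~120 ms RTT"),
        (10.0, 180, "10 Gb/s CERN to Los Angeles, ~180 ms RTT"),
    ]:
        print(f"{path}: buffers >= {bdp_megabytes(rate_gbps, rtt_ms):.0f} MB")

Default socket buffers of the era were tens of kilobytes, so both end hosts must be tuned (SO_SNDBUF/SO_RCVBUF and the corresponding kernel limits) before the network itself can become the bottleneck.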

Impact of high-performance flows across A&R backbones?
Possible solutions:
• Use of “TCP friendly” non-TCP (i.e. UDP) transport
• Use of Scavenger (i.e. less-than-best-effort) services; a marking sketch follows below
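For the second option, a Scavenger-type service is conventionally requested by marking packets with DSCP CS1, inviting routers to drop them first under congestion (following the Internet2 QBone Scavenger convention). A minimal sketch; the endpoint address and port are placeholders:

    import socket

    # Mark outgoing traffic as less-than-best-effort (Scavenger): DSCP CS1 = 8.
    # The DSCP occupies the upper six bits of the TOS byte, hence the shift by 2.
    DSCP_CS1 = 8
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_CS1 << 2)
    sock.sendto(b"bulk data, drop me first", ("192.0.2.10", 5001))  # placeholder endpoint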

DataTAG testbed overview (phase 1 / 2.5G & phase 2 / 10G)

Layer 1/2/3 networking (1)
• Conventional layer 3 technology is no longer fashionable because of:
  – High associated costs, e.g. 200-300 KUSD for a 10G router interface
  – Implied use of shared backbones
• The use of layer 1 or layer 2 technology is very attractive because it helps to solve a number of problems, e.g.:
  – The 1500-byte Ethernet frame size limit (layer 1); see the sketch after this slide
  – Protocol transparency (layers 1 & 2)
  – Minimum functionality, hence, in theory, much lower costs (layers 1 & 2)
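To put a number on the frame-size point: with standard 1500-byte frames a visible share of the wire rate goes to headers, and, since loss-limited TCP throughput also scales with segment size, jumbo frames help twice over. A sketch using the usual IPv4/TCP/Ethernet header sizes:

    # Payload efficiency of standard vs jumbo Ethernet frames (sketch).
    ETH_OVERHEAD = 14 + 4 + 8 + 12   # header + FCS + preamble + inter-frame gap
    IP_TCP_HDRS = 20 + 20            # IPv4 + TCP headers, no options

    for mtu in (1500, 9000):
        payload = mtu - IP_TCP_HDRS
        print(f"MTU {mtu}: {payload / (mtu + ETH_OVERHEAD):.1%} of wire rate is payload")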

Layer 1/2/3 networking (2)
• So-called « lambda Grids » are becoming very popular.
• Pros:
  – Circuit-oriented model like the telephone network, hence no need for complex transport protocols
  – Lower equipment costs (typically a factor of 2 or 3 per layer)
  – The concept of a dedicated end-to-end lightpath is very elegant
• Cons:
  – “End to end” is still very loosely defined: site to site, cluster to cluster, or really host to host?
  – High cost, scalability, and the additional middleware required to deal with circuit set-up, etc.

CGW03, Crakow, 28 October Multi vendor 2.5Gb/s layer 2/3 testbed GigE switch Routers L2 Servers A-7770 C-7606 J-M10 GigE switch L3 Servers A1670 Multiplexer 2*GigE To STARLIGHT From CERN Ditto C-ONS15454 GEANT VTHD Abilene ESNet Canarie Layer 3 Layer 2 Layer 1 2.5G 2.5G GARR INRIA INFN/CNAF 10G CERN UvA 8*GigE STARLIGHT PPARC Super- Janet P-8801

State of 10G deployment and beyond
• Still little deployed, because of lack of demand, hence:
  – Lack of products
  – High costs, e.g. 150 KUSD for a 10GigE port on a Juniper T320 router
  – Even switched, layer 2, 10GigE ports are expensive; however, prices should come down to 10 KUSD/port towards the end of
• 40G deployment, although more or less technologically ready, is unlikely to happen in the near future, i.e. before LHC starts

10G DataTAG testbed extension to Telecom World 2003 and Abilene/CENIC
Sponsors: Cisco, HP, Intel, OPI (Geneva’s Office for the Promotion of Industries & Technologies), Services Industriels de Genève, Telehouse Europe, T-Systems
On September 15, 2003, the DataTAG project became the first transatlantic testbed offering direct 10GigE access, using Juniper’s layer 2 VPN 10GigE emulation.

Impediments to high E2E throughput across LAN/WAN infrastructure
• For many years the wide area network has been the bottleneck; this is no longer the case in many countries, which in principle makes the deployment of data-intensive Grid infrastructure possible!
• Recent I2LSR records show, for the first time ever, that the network can be truly transparent and that throughputs are limited by the end hosts.
• The dream of abundant bandwidth has now become a reality in large, but not all, parts of the world!
• The challenge has shifted from getting adequate bandwidth to deploying adequate LANs and cybersecurity infrastructure, as well as making effective use of it!
• Major transport protocol issues still need to be resolved; however, there are many encouraging signs that practical solutions may now be in sight.

Single TCP stream performance under periodic losses
Bandwidth available = 1 Gb/s; loss rate = 0.01% (see the sketch below):
  – LAN BW utilization = 99%
  – WAN BW utilization = 1.2%
• TCP throughput is much more sensitive to packet loss in WANs than in LANs:
  – TCP’s congestion control algorithm (AIMD) is not suited to gigabit networks
  – Poor, limited feedback mechanisms
  – The effect of even very small packet loss rates is disastrous
• TCP is inefficient in high bandwidth*delay networks
• The future performance of data-intensive Grids looks grim if we continue to rely on the widely deployed TCP Reno stack
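The LAN/WAN asymmetry above is what the standard TCP response function predicts. A sketch using the Mathis et al. approximation, throughput ≈ 1.22 × MSS / (RTT × √p); the RTT values are our illustrative assumptions, since the slide quotes only the resulting utilizations:

    from math import sqrt

    # Mathis et al. steady-state estimate for TCP Reno throughput:
    #   rate ~ 1.22 * MSS / (RTT * sqrt(loss_rate))
    MSS_BITS = 1460 * 8
    LINK_BPS = 1e9        # 1 Gb/s available
    LOSS = 1e-4           # 0.01% packet loss

    for rtt_s, label in [(0.001, "LAN (~1 ms RTT)"), (0.120, "WAN (~120 ms RTT)")]:
        rate = 1.22 * MSS_BITS / (rtt_s * sqrt(LOSS))
        print(f"{label}: {min(rate, LINK_BPS) / LINK_BPS:.1%} utilization")
        # -> link-limited ~100% on the LAN vs loss-limited ~1.2% on the WAN

The same AIMD dynamics explain the recovery problem: after a single loss on an assumed 10 Gb/s, 120 ms RTT path, cwnd is halved and grows back by only one segment per RTT, so recovery takes on the order of 50,000 RTTs with 1500-byte frames, i.e. well over an hour.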