DataTAG Project Status & Perspectives
Olivier Martin, CERN
GNEW’2004 workshop, 15 March 2004, CERN, Geneva


Presentation outline
 Project overview
 Testbed characteristics and evolution
 Major networking achievements
 Where are we?
 Lambda Grids
 Networking testbed requirements
 Acknowledgements
 Conclusions

DataTAG Mission (DataTAG = TransAtlantic Grid)
 EU-US Grid network research
   High performance transport protocols
   Inter-domain QoS
   Advance bandwidth reservation
 EU-US Grid interoperability
 Sister project to EU DataGrid

Project partners

Funding agencies / Cooperating networks

EU collaborators
 Brunel University
 CERN
 CLRC
 CNAF
 DANTE
 INFN
 INRIA
 NIKHEF
 PPARC
 UvA
 University of Manchester
 University of Padova
 University of Milano
 University of Torino
 UCL

US collaborators
 ANL
 Caltech
 Fermilab
 FSU
 Globus
 Indiana
 Wisconsin
 Northwestern University
 UIC
 University of Chicago
 University of Michigan
 SLAC
 StarLight

Workplan
 WP1: Establishment of a high performance intercontinental Grid testbed (CERN)
 WP2: High performance networking (PPARC)
 WP3: Bulk data transfer validations and application performance monitoring (UvA)
 WP4: Interoperability between Grid domains (INFN)
 WP5 & WP6: Dissemination and project management (CERN)

Integration
[Diagram: DataTAG/WP4 framework and relationships; HICB/HIJTB, interoperability standardization, HEP applications and other experiments]

Testbed evolution
 The DataTAG testbed evolved from a simple 2.5 Gb/s Layer 3 testbed (Sept. 2002) into an extremely rich multi-vendor 10 Gb/s Layer 2/Layer 3 testbed (Sept. 2003)
   Alcatel, Chiaro, Cisco, Juniper, PRocket
 Exclusive access to the testbed is granted through an advance testbed reservation application
 Direct extensions to Amsterdam UvA/SURFnet (10G) & Lyon INRIA/VTHD (2.5G)
 Layer 2 extension to INFN/CNAF over GEANT & GARR using Juniper's CCC
 Layer 2 extension to the OptiPuter project at UCSD (University of California San Diego) through Abilene and CENIC under way
 First Layer 2/Layer 3 transatlantic testbed with native 10 Gigabit Ethernet access

[Diagram: simplified DataTAG testbed layout, Geneva-Chicago; phase 1 (2.5 Gb/s STM-16, T-Systems) and phase 2 (10 Gb/s STM-64 optical wave); Cisco, Juniper, Alcatel and Extreme equipment plus Linux PC clusters at both ends; StarLight Cisco 6509 and Force10 in Chicago; extensions to SURFnet, CESNET, VTHD/INRIA, CNAF (via GEANT/GARR) and Abilene; Colt STM-16 backup circuit]


DataTAG testbed: Alcatel, Chiaro, Cisco, Juniper, PRocket

Main networking achievements (1)
 Internet land speed records have been beaten one after the other by the DataTAG project partners and/or teams closely associated with DataTAG:
   ATLAS Canada lightpath experiments during iGrid 2002 (Gigabit Ethernet) and Telecom World 2003 (10 Gigabit Ethernet, aka WAN-PHY)
   New Internet2 Land Speed Record (I2 LSR) by the NIKHEF/Caltech team (SC2002)
   FAST, GridDT, HS-TCP, Scalable TCP experiments (DataTAG partners & Caltech)
   Intel 10GigE tests between CERN (Geneva) and SLAC (Sunnyvale) (CERN, Caltech, Los Alamos National Laboratory, SLAC)
     2.38 Gb/s sustained rate, single flow, 1 TB in one hour
     I2 LSR awarded during the Internet2 spring member meeting (April 2003)

ATLAS Canada lightpath trials
 TRIUMF (Vancouver) & CERN (Geneva) through Amsterdam
 CANARIE 2xGbE circuits to StarLight; SURFnet 2xGbE circuits to NetherLight
 "A full Terabyte of real data was transferred at rates equivalent to a full CD (680 MB) in under 8 seconds and a DVD in under 1 minute" (Wade Hong et al., 09/2002)
 Subsequent 10GigE WAN-PHY experiments during Telecom World 2003 brought effective data transfer rates below one second per CD!

10GigE data transfer trial
In February 2003, a terabyte of data was transferred in 3700 seconds by S. Ravot of Caltech between the Level3 PoP in Sunnyvale, near SLAC, and CERN, through the TeraGrid router at StarLight, from memory to memory, with a single TCP/IPv4 stream. This translates to an average rate of 2.38 Gb/s (using large windows and 9 kB "jumbo frames"). It beat the former record by a factor of ~2.5 and used the 2.5 Gb/s link at 99% efficiency.
A huge distributed effort: highly skilled people monopolized for several weeks!
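The headline numbers are easy to sanity-check. The short script below is a back-of-the-envelope sketch, not part of the original experiment; it assumes 1 TB = 2^40 bytes (the interpretation consistent with the reported 2.38 Gb/s) and a ~120 ms transatlantic round-trip time, a typical Geneva-Sunnyvale value not stated on the slide:

```python
# Sanity-check of the 10GigE trial numbers (illustrative assumptions:
# 1 TB = 2**40 bytes; RTT ~120 ms Geneva<->Sunnyvale, not given on the slide).

volume_bits = 8 * 2**40          # one terabyte (binary), in bits
duration_s = 3700                # transfer time reported on the slide

rate_bps = volume_bits / duration_s
print(f"average rate: {rate_bps / 1e9:.2f} Gb/s")   # -> ~2.38 Gb/s, as reported

# A single TCP stream needs a window of at least bandwidth * RTT to fill the pipe,
# which is why "large windows" were essential.
rtt_s = 0.120                    # assumed round-trip time
window_bytes = rate_bps * rtt_s / 8
print(f"required TCP window: ~{window_bytes / 2**20:.0f} MiB")  # tens of MiB
```

The window calculation also explains the 9 kB jumbo frames: with tens of MiB in flight, larger frames reduce the per-packet processing load on the end hosts.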

10G DataTAG testbed extension to Telecom World 2003 and Abilene/CENIC
 Sponsors: Cisco, HP, Intel, OPI (Geneva's Office for the Promotion of Industries & Technologies), Services Industriels de Geneve, Telehouse Europe, T-Systems
 On September 15, 2003, the DataTAG project became the first transatlantic testbed offering direct 10GigE access, using Juniper's Layer 2 VPN / 10GigE emulation

Main networking achievements (2)
 The latest IPv4 & IPv6 I2 LSRs were awarded, live from the Internet2 fall member meeting in Indianapolis, to Caltech & CERN during Telecom World 2003:
   May 6, 2003: 987 Mb/s single TCP/IPv6 stream
   October 1, 2003: 5.44 Gb/s single TCP/IPv4 stream between Geneva and Chicago: 1.1 TB in 26 minutes, or one 680 MB CD in 1 second
 More records have been established by Caltech & CERN since then:
   November 6, 2003: 5.64 Gb/s single TCP/IPv4 stream between Geneva and Los Angeles (CENIC PoP) across DataTAG and Abilene
   November 11, 2003: 4 Gb/s single TCP/IPv6 stream between Geneva and Phoenix (Arizona) through Los Angeles
   February 24, 2004: 6.25 Gb/s with 9 streams for 638 seconds, i.e. half a terabyte transferred between CERN in Geneva and the CENIC PoP in Los Angeles across DataTAG and Abilene

[Chart: Internet2 Land Speed Record history (IPv4 & IPv6); impact of a single multi-Gb/s flow on the Abilene backbone]

Significance of I2 LSRs to the Grid?
 Essential to establish the feasibility of multi-Gigabit/second single-stream IPv4 & IPv6 data transfers:
   Over dedicated testbeds in a first phase
   Then across academic & research backbones
   Last but not least, across campus networks
   Disk to disk rather than memory to memory
     Study the impact of high performance TCP on disk servers
 Next steps:
   Above 6 Gb/s expected soon between CERN and Los Angeles (Caltech/CENIC PoP) across DataTAG & Abilene
   Goal is to reach 10 Gb/s with new PCI Express buses
   Study alternatives to standard TCP (Reno), as in the selection sketch below:
     Non-TCP transport (Tsunami, SABUL/UDT)
     HS-TCP, Scalable TCP, H-TCP, FAST, Grid-DT, Westwood+, etc.
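On a modern Linux host, TCP variants like those listed above are pluggable congestion-control modules that can be chosen per socket. The following is a minimal present-day sketch, not the tooling DataTAG used in 2003; it assumes a Linux kernel with the htcp module available:

```python
# Minimal sketch: selecting an alternative TCP congestion-control algorithm
# per socket on Linux (assumes the kernel module, e.g. 'htcp', is loaded).
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask for H-TCP instead of the system default (Reno/NewReno then, CUBIC today).
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"htcp")

# Read back what the kernel actually selected.
algo = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
print("congestion control:", algo.strip(b"\x00").decode())

# ... then connect and transfer as usual, e.g. sock.connect((peer_host, peer_port))
```

The per-socket switch matters for testbeds like DataTAG's: competing algorithms can be compared on the same host and path without rebooting into a patched kernel, which is how most of these stacks had to be tested at the time.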

Main networking achievements (3)
 QoS
   [Diagram: IP QoS test setup; Layer 2 VLANs on both sides of a Juniper M10 with a 1 GE bottleneck, IP QoS configured, Assured Forwarding (AF) vs Best Effort (BE) traffic classes, Geneva end]
 Advance bandwidth reservation (a toy admission-control sketch follows below)
   GARA extensions
   AAA extensions
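To make the reservation idea concrete: a client asks in advance for a bandwidth-guaranteed slot on a path, and the scheduler admits the request only if capacity remains for the whole interval. The sketch below is hypothetical; every name and field is invented for illustration and does not reflect the actual GARA or AAA interfaces:

```python
# Hypothetical advance-bandwidth-reservation admission check; all names and
# fields are invented for illustration, NOT the actual GARA/AAA interfaces.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Reservation:
    src: str            # ingress endpoint
    dst: str            # egress endpoint
    gbps: float         # requested guaranteed bandwidth
    start: datetime
    end: datetime

LINK_CAPACITY_GBPS = 2.5   # e.g. the phase-1 transatlantic circuit

def admit(new: Reservation, booked: list[Reservation]) -> bool:
    """Admit only if concurrent reservations never exceed link capacity."""
    overlapping = [r for r in booked
                   if r.start < new.end and new.start < r.end]
    return sum(r.gbps for r in overlapping) + new.gbps <= LINK_CAPACITY_GBPS

t0 = datetime(2003, 6, 1, 14, 0)
booked = [Reservation("cern-gva", "starlight-chi", 1.0, t0, t0 + timedelta(hours=2))]
req = Reservation("cern-gva", "starlight-chi", 1.5,
                  t0 + timedelta(hours=1), t0 + timedelta(hours=3))
print(admit(req, booked))  # True: 1.0 + 1.5 <= 2.5 during the overlap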

Where are we?
 The DataTAG project came along at exactly the right time:
   Back in late 2000, 2.5 Gb/s looked futuristic
   10GigE, especially host interfaces, did not really exist
   However, it was already very clear that the standard TCP stack (Reno/NewReno) was problematic (the arithmetic below makes this precise)
   Much hope was placed on autotuning (Web100/Net100) & ECN/RED-like solutions
   Actual bit error rates of transatlantic circuits were over-estimated
 Over-provisioned R&D backbones such as Abilene, CANARIE and GEANT are in much better shape than expected
   For how long?
 One of the strongest results of DataTAG is demonstrating the extreme vulnerability of production R&D backbones in the presence of high performance flows (10GigE or even less)
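Why Reno is problematic on long fat pipes can be made precise with the well-known Mathis et al. approximation for steady-state TCP throughput; this is a standard result, not stated on the slide, and the numbers plugged in below (MSS = 1460 bytes, RTT = 120 ms transatlantic, a 10 Gb/s target) are illustrative assumptions:

```latex
% Mathis et al. approximation for steady-state TCP Reno throughput,
% with loss probability p and constant C (about 1.22 for random loss):
\[
  \text{throughput} \;\approx\; \frac{\mathrm{MSS}}{\mathrm{RTT}} \cdot \frac{C}{\sqrt{p}},
  \qquad C \approx 1.22 .
\]
% Solving for the loss rate needed to sustain 10 Gb/s over a 120 ms path
% with a 1460-byte MSS:
\[
  p \;\approx\; \left( \frac{C \cdot \mathrm{MSS}}{\mathrm{RTT} \cdot \text{throughput}} \right)^{\!2}
  \;\approx\; \left( \frac{1.22 \times 1460 \times 8\ \text{bit}}
                          {0.12\ \text{s} \times 10^{10}\ \text{bit/s}} \right)^{\!2}
  \;\approx\; 1.4 \times 10^{-10}.
\]
% That is fewer than one lost packet per several billion sent, which no real
% path guarantees; hence the search for HS-TCP, Scalable TCP, FAST, etc.
```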

Where are we (cont.)?
 For many years the wide area network has been the bottleneck; this is no longer the case in many countries, making the deployment of data-intensive Grid infrastructure possible in principle, e.g. EGEE, the DataGrid successor
 Recent I2 LSR records show, for the first time ever, that the network can be truly transparent and that throughput is only limited by the end hosts and/or campus network infrastructures (see the buffer-sizing sketch below)
 The challenge has shifted from getting adequate bandwidth to deploying adequate LAN and cybersecurity infrastructure, as well as making effective use of it!
 Non-trivial transport protocol issues still need to be resolved
   The only encouraging sign is that this is now widely recognized
   But we are still quite far from converging on a practical solution
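Since the end hosts are now the limit, the first thing a transfer node must get right is socket buffers at least as large as the bandwidth-delay product computed earlier. A minimal present-day sketch (the 32 MiB value assumes the ~2.5 Gb/s x 120 ms path above, and the kernel must also permit such sizes):

```python
# Minimal sketch: sizing socket buffers to the bandwidth-delay product so a
# single stream can fill a long fat pipe (assumes ~2.5 Gb/s x 120 ms path).
import socket

BDP_BYTES = 32 * 2**20   # ~32 MiB, roughly bandwidth * RTT for the path above

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)

# The kernel may clamp the request (net.core.wmem_max / tcp_wmem on Linux),
# so verify what was actually granted before blaming the network.
print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```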

Layer 1/2/3 networking (1)
 Conventional Layer 3 technology is no longer fashionable because of:
   High associated costs, e.g. 200-300 kUSD for a 10G router interface
   Implied use of shared backbones
 The use of Layer 1 or Layer 2 technology is very attractive because it helps solve a number of problems, e.g.
   The 1500-byte Ethernet frame size limit (Layer 1)
   Protocol transparency (Layer 1 & Layer 2)
   Minimum functionality, hence, in theory, much lower costs (Layer 1 & 2)

Layer 1/2/3 networking (2)
 "Lambda Grids" are becoming very popular:
 Pros:
   Circuit-oriented model like the telephone network, hence no need for complex transport protocols
   Lower equipment costs (i.e. "in theory" a factor of 2 or 3 per layer)
   The concept of a dedicated end-to-end lightpath is very elegant
 Cons:
   "End to end" is still very loosely defined: site to site, cluster to cluster, or really host to host?
   Higher circuit costs, scalability, additional middleware to deal with circuit set-up/tear-down, etc.
   Extending dynamic VLAN functionality to the campus network is a potential nightmare!

"Lambda Grids": what does it mean?
 Clearly different things to different people, hence the "apparently easy" consensus!
 Conservatively, on-demand "site to site" connectivity:
   Where is the innovation?
   What does it solve in terms of transport protocols?
   Where are the savings?
     Fewer interfaces needed (customer), but more standby/idle circuits needed (provider)
     Economics from the service provider vs the customer perspective?
       Traditionally, switched services have been very expensive (usage-based vs flat charge); the break-even between switched and leased circuits has been a few hours/day. Why would this change? (see the back-of-the-envelope sketch below)
     If there are no savings, why bother?
 More advanced: cluster to cluster
   Implies even more active circuits in parallel
 Even more advanced: host to host, all-optical
   Is it realistic?
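The break-even point mentioned above is simple arithmetic. The sketch below uses invented placeholder prices, purely to illustrate the usage-vs-flat comparison; they are not DataTAG or carrier figures:

```python
# Back-of-the-envelope: switched (usage-charged) vs leased (flat-charged) circuit.
# All prices are invented placeholders, NOT DataTAG or carrier figures.

LEASED_PER_MONTH = 60_000.0    # hypothetical flat monthly charge, leased circuit
SWITCHED_PER_HOUR = 700.0      # hypothetical per-hour charge, switched circuit

# Hours of use per day at which the switched service starts costing more:
break_even_hours = LEASED_PER_MONTH / (SWITCHED_PER_HOUR * 30)
print(f"break-even: ~{break_even_hours:.1f} hours/day")   # ~2.9 h/day here

# The slide's point: with a break-even of a few hours/day, any circuit that is
# busy most of the day is cheaper leased; on-demand lambdas only pay off if
# switched pricing drops well below the traditional ratio.
```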

Networking testbed requirements
 Multi-vendor:
   Unless a particular research group is specifically interested in the behaviour of TCP in the presence of out-of-order packets, running high performance TCP tests across a Juniper M160 backbone is pretty useless
   Achievable IPv6 performance varies widely between vendors
   MPLS & QoS implementations also vary widely
   Interoperability
 Dynamic:
   Implies manpower & money
 Partitionable:
   Reservation application
 Reconfigurable:
   Avoid manual recabling; implies an electronic or optical switch/patch panel
 Extensible:
   Extensions to other networks; implies collaboration
 Not limited to network equipment; must also include high performance servers, high performance disks & NICs
 Coordination with other testbeds

Acknowledgements
 The project would not have accumulated so many successes without the active participation of our North American colleagues, in particular:
   Caltech/DoE
   University of Illinois/NSF
   iVDGL
   StarLight
   Internet2/Abilene
   CANARIE
 and of our European sponsors and colleagues as well, in particular:
   the European Union's IST programme
   DANTE/GEANT
   GARR
   SURFnet
   VTHD
 The GNEW2004 workshop is yet another example of successful collaboration between Europe and the USA