30 June 2004: Wide Area Networking Performance Challenges. Olivier Martin, CERN. UK DTI visit.


30 June 2004, UK DTI visit, Slide 1: Wide Area Networking Performance Challenges. Olivier Martin, CERN.

30 June 2004, UK-DTI visit, Slide 2: Presentation Outline
- CERN's connectivity to the Internet
- DataTAG project overview
- Wide Area Networking challenges
- Where do we want to be by the start of the LHC in 2007?
- Where are we now?

30 June 2004, Slide 3: CERN External Networking, Main Internet Connections
[Diagram of CERN's main Internet connections through the CIXP (CERN Internet Exchange Point): SWITCH (Swiss National Research Network), GEANT (general-purpose European A&R connectivity), USLIC/DataTAG (general-purpose North American A&R connectivity, combined with DataTAG), NetherLight, and ATRIUM/VTHD (network research, FR); link capacities shown range from 1 Gbps to 10 Gbps.]

Final DataTAG Review, 24 March 2004: Project partners

Final DataTAG Review, 24 March 2004: DataTAG Mission (TransAtlantic Grid)
- EU-US Grid network research
  - High-performance transport protocols
  - Inter-domain QoS
  - Advance bandwidth reservation
- EU-US Grid interoperability
- Sister project to EU DataGRID

LHC Data Grid Hierarchy
[Diagram: the experiment's online system feeds the CERN Tier 0+1 centre (~700k SI95, ~1 PB disk, tape robot) at ~PByte/sec; Tier 1 centres (FNAL: 200k SI95, 600 TB; IN2P3; INFN; RAL) are linked at 2.5/10 Gbps; Tier 2 centres at ~2.5 Gbps; Tier 3 institutes (~0.25 TIPS) at 0.1-1 Gbps; Tier 4 workstations at ~MBytes/sec, with a physics data cache at each institute. CERN/Outside resource ratio ~1:2; Tier0 : (sum of Tier1) : (sum of Tier2) ~ 1:1:1. Physicists work on analysis "channels"; each institute has ~10 physicists working on one or more channels.]

30 June 2004, UK DTI visit, Slide 7: Deploying the LHC Grid
[Diagram: CERN Tier 0 (the LHC Computing Centre) and the CERN Tier 1 connected to national Tier 1 centres (Germany, USA, UK, France, Italy, Japan, Taipei?); Tier 2 sites (labs and universities); Tier 3 physics-department resources and desktops; overlays mark a grid for a regional group and a grid for a physics study group.]

10 June 2004, UK-DTI visit, Slide 8: Main Networking Challenges
- Fulfil the, as yet unproven, assertion that the network can be « nearly » transparent to the Grid
- Deploy suitable Wide Area Network infrastructure (… Gb/s)
- Deploy suitable Local Area Network infrastructure (matching or exceeding that of the WAN)
- Seamless interconnection of LAN & WAN infrastructures (firewall?)
- End-to-end issues: transport protocols, PCs (Itanium, Xeon), 10GigE NICs (Intel, S2io)
- Where are we today?
  - memory to memory: 6.5 Gb/s
  - memory to disk: 1.2 GB/s (Windows 2003 Server/NewiSys)
  - disk to disk: 400 MB/s (Linux), 600 MB/s (Windows)

10 June 2004, UK-DTI visit, Slide 9: Main TCP issues
- Does not scale to some environments
  - High speed, high latency
  - Noisy
- Unfair behaviour with respect to (illustrated in the sketch after this list):
  - Round Trip Time (RTT)
  - Frame size (MSS)
  - Access bandwidth
- Widespread use of multiple streams to compensate for inherent TCP/IP limitations (e.g. GridFTP, bbFTP): a bandage rather than a cure
- New TCP/IP proposals aim to restore performance in single-stream environments
  - Not clear if/when they will have a real impact
  - In the meantime there is an absolute requirement for backbones with zero packet loss and no packet re-ordering
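To make the RTT and MSS dependence concrete, here is a minimal sketch based on the Mathis et al. steady-state TCP throughput estimate (rate ≈ MSS/RTT · C/√p). The loss rate, RTT values, MSS values and stream counts below are illustrative assumptions, not DataTAG measurements; the multi-stream column shows why tools such as GridFTP open parallel connections.

```python
# Rough steady-state TCP Reno throughput estimate (Mathis et al., 1997):
#   throughput ~ (MSS / RTT) * (C / sqrt(p)),  C ~ sqrt(3/2)
# Illustrative only: the loss rate, RTTs and stream counts are assumed values.
import math

def reno_throughput_bps(mss_bytes, rtt_s, loss_rate, streams=1):
    """Aggregate throughput (bits/s) of `streams` identical TCP flows."""
    c = math.sqrt(1.5)
    per_stream = (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))
    return streams * per_stream

loss = 1e-7  # assumed residual loss rate on a "clean" backbone
for rtt_ms in (10, 100, 200):            # LAN-like vs transatlantic RTTs
    for mss in (1460, 8960):             # standard vs jumbo-frame MSS
        single = reno_throughput_bps(mss, rtt_ms / 1000, loss)
        multi = reno_throughput_bps(mss, rtt_ms / 1000, loss, streams=16)
        print(f"RTT={rtt_ms:3d} ms  MSS={mss:5d} B  "
              f"1 stream: {single/1e9:6.2f} Gb/s   16 streams: {multi/1e9:7.2f} Gb/s")
```

Halving the RTT roughly doubles the single-stream rate and a larger MSS raises it proportionally, which is exactly the unfairness listed above; adding streams simply multiplies the aggregate, the "bandage" used by GridFTP and bbFTP.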

10 June 2004, UK-DTI visit, Slide 10: TCP dynamics (10 Gbps, 100 ms RTT, 1500-byte packets)
- Window size (W) = Bandwidth * Round Trip Time
  - W(bits) = 10 Gbps * 100 ms = 1 Gb
  - W(packets) = 1 Gb / (8 * 1500) ≈ 83,333 packets
- Standard Additive Increase Multiplicative Decrease (AIMD) mechanisms:
  - W = W/2 (halving the congestion window on a loss event)
  - W = W + 1 (increasing the congestion window by one packet every RTT)
- Time to recover from W/2 to W (congestion avoidance) at 1 packet per RTT:
  - RTT * W(packets)/2 ≈ 4,170 s ≈ 1.16 hours
  - In practice, 1 packet per 2 RTTs because of delayed ACKs, i.e. ≈ 2.3 hours
- Packets per second at full window: W(packets) / RTT ≈ 833,333 packets/s
(these figures are reproduced in the sketch below)
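A short sketch reproducing the slide's arithmetic; the 10 Gbps bandwidth, 100 ms RTT and 1500-byte packet size come from the slide, and the recovery rule (+1 packet per RTT, halving on loss) is standard TCP congestion avoidance.

```python
# Reproduce the slide's AIMD arithmetic for a single TCP flow
# filling a 10 Gbps, 100 ms RTT path with 1500-byte packets.
bandwidth_bps = 10e9      # 10 Gbps
rtt_s = 0.100             # 100 ms round trip time
packet_bytes = 1500

w_bits = bandwidth_bps * rtt_s              # bandwidth-delay product
w_packets = w_bits / (8 * packet_bytes)     # congestion window in packets

# Congestion avoidance grows the window by 1 packet per RTT, so recovering
# the W/2 packets lost to a single multiplicative decrease takes W/2 RTTs.
recover_s = rtt_s * w_packets / 2
recover_delayed_ack_s = 2 * recover_s       # ~1 packet per 2 RTTs with delayed ACKs

packets_per_s = w_packets / rtt_s           # packet rate at full window

print(f"Window: {w_bits/1e9:.1f} Gb = {w_packets:,.0f} packets")
print(f"Recovery W/2 -> W: {recover_s/3600:.2f} h "
      f"({recover_delayed_ack_s/3600:.2f} h with delayed ACKs)")
print(f"Packet rate at full window: {packets_per_s:,.0f} packets/s")
```

At these parameters a single loss event stalls the flow for over an hour, which is why the previous slide insists on backbones with zero packet loss and no re-ordering.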

10 June 2004, UK-DTI visit, Slide 11: 10G DataTAG testbed extension to Telecom World 2003 and Abilene/CENIC
On September 15, 2003, the DataTAG project became the first transatlantic testbed to offer direct 10GigE access, using Juniper's layer-2 VPN / 10GigE emulation.

Final DataTAG Review, 24 March 2004: Internet2 land speed record history (IPv4 & IPv6)
[Chart: Internet2 land speed record history over the period, with an inset showing the impact of a single multi-Gb/s flow on the Abilene backbone.]

Final DataTAG Review, 24 March 2004: Internet2 land speed record history (IPv4 & IPv6)
[Chart: Internet2 land speed record history over the period.]