ISGC 2005, 27 Apr. 2005, Bruno Hoeft: 10 Gbit between GridKa and openlab (and their obstacles). Forschungszentrum Karlsruhe GmbH, Institute for Scientific Computing.


ISGC 2005, 27 Apr. 2005, Bruno Hoeft: 10 Gbit between GridKa and openlab (and their obstacles). Forschungszentrum Karlsruhe GmbH, Institute for Scientific Computing, P.O. Box 3640, Karlsruhe, Germany.

Outline
- LAN of GridKa
  - Structure
  - Projected installation for 2008
- WAN
  - History (1 Gbit/s, 2003)
  - Current 10 Gbit/s
  - Testbed
  - Challenges of crossing multiple NRENs (National Research and Education Networks)
  - Quality and quantity evaluation of the GridKa to openlab network connection
  - File transfer

[Photo: GridKa site; IWR buildings 441 and 442, tape storage, main building.]

Projects at GridKa: experiments from CERN (e.g. ATLAS), SLAC (USA) and FermiLab (USA), i.e. both LHC and non-LHC experiments, calculating jobs with "real" data. 1.5 million jobs and 4.2 million hours of computation in 2004.

The LHC multi-tier computing model (LHC Computing Grid project, LCG): the experiments (CMS, ATLAS, LHCb) feed the Tier 0 centre at CERN; Tier 1 centres in Germany (FZK, Grid Computing Centre Karlsruhe), USA (Fermi, BNL), UK (RAL), France (IN2P3), Italy (CNAF) and others; Tier 2 at university and laboratory computing centres (Uni a..e, Lab x..z); Tier 3 at institute computers; Tier 4 desktops; working groups and virtual organizations.

Can you imagine what a petabyte is? DSL has 1024 kbit/s, so you can transfer 128 kB in a second, approx. 40 MB in 5 minutes, 1 GB in 2 hours, 1 TB in 90 days, and 1 PB in 248 years. 1 PB is the content of 1.4 million CDs, giving a tower 1.4 km high!

Gradual extension of GridKa resources (table of processors, computing power [kSI2k], disk [TB], tape [TB] and Internet [Gb/s] for Apr 2004, Oct 2004, Apr 2005 and the % of the 2008 target; * Internet connection for the service challenge). April 2005: biggest Linux cluster within the German science community; largest online storage at a single installation in Germany; strongest Internet connection in Germany; available on the Grid with over 100 installations in Europe.

[Diagram: GridKa network installation. Compute nodes and file servers (NAS and SAN) behind switches and routers in a private network; login servers and a PIX firewall provide 1 Gbit/s direct internet access via DFN; dedicated sc nodes on the SAN use the 10 Gbit/s link for the service challenge only. Link types: 100 Mbit, 320 Mbit, 1 Gbit and 10 Gbit Ethernet, 2 Gbit FibreChannel.]

[Diagram: the same network installation including the management network: a master controller and resource management with Ganglia, Nagios and Cacti monitoring, attached via a dedicated management switch.]

GridKa WAN connectivity planning: … Mbps, 155 Mbps, 2 Gbps, 10 Gbps, 20 Gbps? Lightpath to CERN: start discussions with DFN/DANTE! 10 G internet connection since Oct.; mid to end of 2005: production. X-Win partner of DFN; lightpath between CERN and GridKa via X-Win and Géant2.

[Diagram: WAN connection and Gigabit test with CERN. GridFTP servers at Karlsruhe and Geneva; Karlsruhe connects via 2x 1 Gbps to Frankfurt, DFN at 2.4 Gbps, Géant at 10 Gbps. GridFTP tested over 1 Gbps, reaching 98% of 1 Gbit/s.]

[Diagram: WAN connection, 10 Gigabit test with CERN. GridFTP servers at Karlsruhe and Geneva; DFN and Géant both at 10 Gbps. Routed path: r-internet.fzk.de, ar-karlsruhe1-ge g-win.dfn.de, cr-karlsruhe1-po12-0.g-win.dfn.de, dfn.de1.de.geant.net, de.it1.it.geant.net, it.ch1.ch.geant.net, swiCE2-P6-1.switch.ch, swiCE3-G4-3.switch.ch.]

[Figure: Géant network map.]

[Diagram: WAN connection, 10 Gigabit test with CERN, with the traffic rerouted through an MPLS tunnel via France. Path: r-internet.fzk.de, ar-karlsruhe1-ge g-win.dfn.de, cr-karlsruhe1-po12-0.g-win.dfn.de, dfn.de1.de.geant.net, de.fr1.fr.geant.net, fr.ch1.ch.geant.net, swiCE3-G4-3.switch.ch.]

WAN connection, 10 Gigabit test with CERN (GridFTP servers at Karlsruhe and Geneva; DFN and Géant at 10 Gbps). Test programme:
- Bandwidth evaluation (TCP/UDP)
- MPLS via France (MPLS: MultiProtocol Label Switching)
- LBE (Least Best Effort)
- GridFTP server pool: HD to HD, storage to storage (see the transfer sketch after this list)
- SRM
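A transfer of the kind listed above would typically be driven with globus-url-copy from the Globus Toolkit. This is only an illustrative sketch: the hostnames, paths and the stream/buffer settings are assumptions, not the configuration actually used in the challenge.

```bash
# Illustrative GridFTP third-party transfer from an openlab server to a GridKa
# sc node (hypothetical hostnames and paths).
#   -vb      : print performance information during the transfer
#   -p 4     : use 4 parallel TCP streams (assumed value)
#   -tcp-bs  : 2 MB TCP buffer, matching the window size quoted on the hardware slide
globus-url-copy -vb -p 4 -tcp-bs 2097152 \
  gsiftp://oplapro73.cern.ch/data/sc/testfile \
  gsiftp://10gkt113.fzk.de/gpfs/sc/testfile
```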

Hardware
- Various dual Xeon 2.8 and 3.0 GHz IBM x-series servers (Intel and Broadcom NICs); recently added 3.0 GHz EM64T (800 MHz FSB)
- Cisco 6509 with 4 x 10 Gb ports and lots of 1 Gb ports
- Storage: DataDirect S2A8500 with 16 TB
- Linux RH ES 3.0 (U2 and U3), GPFS
- 10 GE link to Géant via DFN (least best effort)
- TCP/IP stack: 4 MB buffer, 2 MB window size
[Inset diagram: sc nodes and SAN behind the router, 10 Gbit/s to DFN, service challenge only.]
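Buffer and window sizes of this order are usually set through the Linux TCP sysctls; the sketch below shows one plausible way to do it. Only the 4 MB and 2 MB figures come from the slide; the choice of keys and the remaining values are assumptions.

```bash
# Plausible host tuning for the sc nodes: 4 MB maximum socket buffers and a
# 2 MB default TCP window (values other than 4 MB / 2 MB are illustrative).
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304
sysctl -w net.ipv4.tcp_rmem="4096 2097152 4194304"   # min / default / max receive buffer
sysctl -w net.ipv4.tcp_wmem="4096 2097152 4194304"   # min / default / max send buffer
```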

Quality evaluation (UDP streams, iperf); ooo = out of order.
LAN, 10gkt111 and 10gkt113 (symmetric):
  [10gkt111] iperf -s -u / [10gkt113] iperf -c 10gkt111 -u -b 1000M: 957 Mbit/s, jitter 0.028 ms, ooo 0
  [10gkt113] iperf -s -u / [10gkt111] iperf -c 10gkt113 -u -b 1000M: 957 Mbit/s, jitter 0.021 ms
WAN, 10gkt113 and oplapro73 (asymmetric):
  [oplapro73] iperf -s -u / [10gkt113] iperf -c oplapro73 -u -b 1000M: 953 Mbit/s, jitter 0.018 ms, ooo 215 (0.008%)
  [10gkt113] iperf -s -u / [oplapro73] iperf -c 10gkt113 -u -b 1000M: 884 Mbit/s, jitter 0.015 ms, ooo 4239 (0.018%)
(Node TCP stacks labelled Reno and scalable.)
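For reference, one direction of the UDP measurement can be reconstructed as a self-contained iperf (version 2 syntax) command pair; the reporting interval and duration are added here as assumptions.

```bash
# Receiver (e.g. oplapro73): UDP server, reports bandwidth, jitter and
# out-of-order datagrams every 10 seconds.
iperf -s -u -i 10

# Sender (e.g. 10gkt113): offer a 1 Gbit/s UDP stream for 5 minutes.
iperf -c oplapro73 -u -b 1000M -t 300 -i 10
```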

Bandwidth evaluation (TCP streams) over the WAN with iperf: a single TCP (Reno) stream reaches 348 Mbit/s; five single streams run between 10gkt101-10gkt105 and oplapro71-oplapro75 (Reno on both ends).

Bandwidth evaluation over the WAN with iperf, 10gkt10[1-5] to oplapro7[1-5] (Reno): 5 nodes at 112 MByte/s each using 24 parallel threads per node (throughput plotted over time).
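The parallel TCP runs can be approximated with iperf's multi-stream client; in the sketch below the window size and duration are assumptions, while the 24 streams per node pair follow the slide.

```bash
# Receiver on each openlab node (oplapro71 .. oplapro75): TCP server, 2 MB window.
iperf -s -w 2M

# Sender on each GridKa node (10gkt101 .. 10gkt105): 24 parallel TCP streams
# to its peer node, running for 10 minutes.
iperf -c oplapro71 -P 24 -w 2M -t 600
```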

[Figure: weather map of the Géant network.]

Evaluation of maximum throughput (Mbit/s over time, 18:00 to 20:00): 9 nodes at each site, 8 streams at 845 Mbit/s and 1 stream at 540 Mbit/s; pushing one stream to higher speed results in packet loss.

GridFTP SC1 throughput (Mbit/s, 4th to 8th Feb. 05): SC1, 500 MByte/s sustained; transfers to /dev/null and to HD/SAN. 19 nodes: 15 worker nodes at 20 MByte/s each (IDE/ATA HD), 1 file server at 50 MByte/s (SCSI HD), 3 file servers at 50 MByte/s each (SAN).

SC2: approx. 1/5 of the load.

SC2: five nodes at GridKa, GridFTP to GPFS.

Conclusion
- GridKa: Tier-1 centre for the LHC, serving 4 production (non-LHC) and 4 LHC experiments
- Multi-NREN 10 Gbit link, up to 7.0 Gbit/s usable
- Service Challenges test the whole chain from tape over WAN to tape, in different steps
- Just finished SC2 with approx. 1/5 of the CERN aggregated load
- Lightpath via X-Win (DFN) / Géant2 to CERN will be realised in 2006

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft