CERN IT Department, CH-1211 Genève 23, Switzerland (www.cern.ch/it)
Xrootd through WAN... Can I?
Now my WAN is well tuned. So what?


Xrootd through WAN... Can I?
Now my WAN is well tuned. So what?
(Xrootd advancements on the "long, fat pipe" problem)

Step 1: Tune it!
Costin's recipe can boost the efficiency of WAN links
 – More precisely, it makes their usage much less inefficient
 – It works by giving the correct TCP parameters to the kernel
 – Both client and server machines need the fix
Everybody will be happier. But then?
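The kernel tuning boils down to making the TCP buffers at least as large as the bandwidth-delay product of the link. A minimal sketch of that arithmetic (the 1 Gbit/s link speed is an assumed example; the 180 ms RTT is the figure quoted later in the talk):

```python
# Bandwidth-delay product: the TCP window needed to keep a
# "long, fat pipe" full.  Numbers are illustrative.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Return the bandwidth-delay product in bytes."""
    return bandwidth_bps * rtt_s / 8

# A 1 Gbit/s link with a 180 ms round-trip time:
window = bdp_bytes(1e9, 0.180)
print(f"Required TCP window: {window / 1e6:.1f} MB")  # 22.5 MB
```

If the kernel's maximum socket buffer is smaller than this, the pipe can never be kept full, no matter what the application does.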

WANs are difficult
On a WAN, every client/server response arrives much later
 – e.g. 180 ms later
Even with a well-tuned WAN, applications must be built with WANs in mind
 – Otherwise the latency is a wall impossible to climb, i.e. VERY bad performance
 – Bulk-transfer apps are OK (gridftp, xrdcp, fdt, etc.)
 – There are more interesting use cases, with much more benefit to gain
XROOTD and ROOT are OK
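Why latency is the wall: a chatty application that issues its requests one at a time pays a full round trip per request, while one that pipelines them pays (ideally) a single round trip. A back-of-the-envelope sketch, using the 180 ms RTT quoted above and an assumed request count:

```python
# Cost of synchronous vs. pipelined requests over a high-latency link.
RTT_S = 0.180  # round-trip time quoted in the slide

def sequential_time(n_requests: int, rtt_s: float = RTT_S) -> float:
    """Waiting time if each request is sent only after the previous reply."""
    return n_requests * rtt_s

def pipelined_time(n_requests: int, rtt_s: float = RTT_S) -> float:
    """Idealized pipelined case: all requests in flight, one RTT paid."""
    return rtt_s

# e.g. 1000 small reads, as in a tree scan without prefetching:
print(f"sequential: {sequential_time(1000):.0f} s")
print(f"pipelined:  {pipelined_time(1000):.2f} s")
```

This is the gap between "walls impossible to climb" and applications like XROOTD and ROOT that keep many requests in flight.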

A simple use case
I am a physicist waiting for the results of my analysis jobs
 – Many bunches, several outputs, saved e.g. to ALICE::CERN::SE
 – My laptop is configured to display histograms etc. with ROOT
 – I leave for a conference; the jobs finish while I am on the plane
 – Once there, I want to simply draw the results from my AliEn home directory
 – And save my new histograms in the same place
 – I have no time to lose tweaking things to get a copy of everything; copies get lost in the confusion
I know nothing about parameters to tweak. What can I expect? Can I do it?

Another use case
ALICE analysis on the Grid
 – Each job reads ~150 MB from ALICE::CERN::SE
 – It is difficult to put location-dependent tweaks into jobs
It would be nice to speed this up
 – At 5 MB/s the read takes ~30 s; at 1 MB/s it takes ~150 s
Sometimes data are accessed remotely
 – It would be nice if that were more efficient: better use of resources, more processed jobs per day
After all, ROOT/AliRoot cannot read or write data at more than ~20 MB/s with 100% usage of one core
 – This fits perfectly with the current WAN status
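The per-job time lost to I/O follows directly from size over rate; a quick sketch using the ~150 MB figure from this slide:

```python
def transfer_time_s(size_mb: float, rate_mb_s: float) -> float:
    """Seconds needed to move size_mb megabytes at rate_mb_s MB/s."""
    return size_mb / rate_mb_s

# The ~150 MB each ALICE analysis job reads, at the two quoted rates:
for rate in (5, 1):
    print(f"at {rate} MB/s: {transfer_time_s(150, rate):.0f} s")
```

Multiplied over thousands of Grid jobs, the difference between the two rates is a substantial fraction of the farm's wall time.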

Up to now
It was already possible
 – But it needed a tweak to enable/disable the WAN mode of XrdClient
 – So it was difficult to automate; very technical, hence nobody cares!
Now things are much better
 – The good old WAN mode is fine for bulk transfers
 – The new improvements exploit the possibilities of newer kernels and TCP stacks
 – Interactive use should need nothing special
So, if you have:
 – The new client (to appear in ROOT next week)
 – A new server (available through xrd-installer), with the new fixed configuration
you can expect a good improvement over the past, without doing anything special, no tweaks.
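The reason the old manual toggle was hard to automate is that the right setting depends on where the server is. A purely hypothetical sketch of the kind of decision the new client can make on its own (the threshold and function names are invented for illustration; they are not real XrdClient parameters):

```python
# Hypothetical auto-selection of bulk-transfer ("WAN") behaviour based
# on measured round-trip time.  Threshold is an assumption.

WAN_RTT_THRESHOLD_S = 0.010  # assume >10 ms means the server is not local

def choose_mode(measured_rtt_s: float) -> str:
    """Pick WAN-style settings only for genuinely distant servers."""
    return "wan" if measured_rtt_s > WAN_RTT_THRESHOLD_S else "lan"

print(choose_mode(0.180))   # transatlantic link
print(choose_mode(0.0002))  # same machine room
```

The point of the new client/server pair is exactly that this choice no longer has to be made by the user.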

Heavy drawing/reading
[Performance plots: 180 ms RTT (Caltech machinery), very well tuned kernels]

Writing and overheads
[Performance plots: 180 ms RTT (Caltech machinery), very well tuned kernels]

Conclusion
Things look very good
 – Throughput is of the same order of magnitude as a local RAID disk (and who has a RAID in their laptop?)
 – Writing really gets a boost: aren't job outputs sometimes written that way, even with TFile::Cp?
The improvements are on by default, but they need:
 – Well-configured client and server machines
 – The newest xrootd client and server
 – An app for which it makes sense, e.g. ROOT with well-designed data, AliEn