1
ATLAS Tier-3 in Geneva
Szymon Gadomski, Uni GE, at CSCS, November 2009
– the Geneva ATLAS Tier-3 cluster
– what it is used for
– recent issues and long-term concerns
2
ATLAS computing in Geneva
268 CPU cores
180 TB for data
– 70 TB in a Storage Element
special features:
– direct line to CERN at 10 Gb/s
– latest software via CERN AFS
– SE in Tiers of ATLAS since Summer 2009
– FTS channels from CERN and from the NDGF Tier 1
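For scale (my arithmetic, not on the slide): a minimal sketch of what the 10 Gb/s line to CERN corresponds to in bytes, assuming the nominal link speed with no protocol overhead. It comes out around 1.25 GB/s, or on the order of 100 TB per day, comparable to the full 180 TB store.

```python
# Back-of-the-envelope capacity of the 10 Gb/s line to CERN (illustrative only).
LINK_GBPS = 10                          # nominal link speed, gigabits per second

bytes_per_s = LINK_GBPS * 1e9 / 8       # 1.25e9 B/s, i.e. 1.25 GB/s
tb_per_day = bytes_per_s * 86400 / 1e12

print(f"Nominal capacity: {bytes_per_s / 1e6:.0f} MB/s")
print(f"Per day:          {tb_per_day:.0f} TB/day")   # ~108 TB/day
```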
3
Networks and systems
(diagram of the cluster networks and systems)
4
S. Gadomski, "Status and plans of the T3 in Geneva", Swiss ATLAS Grid Working Group, 7 Jan 20084 Setup and use 1.Our local cluster –log in and have an environment to work with ATLAS software, both offline and trigger develop code, compile, interact with ATLAS software repository at CERN –work with nightly releases of ATLAS software, normally not distributed off-site but visible on /afs –disk space, visible as normal linux file systems –use of final analysis tools, in particular ROOT –a easy way to run batch jobs 2.A grid site –tools to transfer data from CERN as well as from and to other Grid sites worldwide –a way for ATLAS colleagues, Swiss and other, to submit jobs to us –ways to submit our jobs to other grid sites ~55 active users, 75 accounts, ~90 including old not only Uni GE; an official Trigger development site
5
Statistics of batch jobs
– NorduGrid production since 2005
– ATLAS never sleeps
– local jobs taking over in recent months
6
Added value by resource sharing
– local jobs come in peaks
– the grid always has jobs
– little idle time, a lot of Monte Carlo done
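To make the sharing argument concrete (not from the slides): a toy Python sketch in which local analysis jobs arrive in weekday peaks and grid production backfills the idle cores. The demand pattern is invented purely for illustration; only the core count comes from the slides.

```python
# Toy illustration of resource sharing: local jobs come in peaks,
# grid production backfills the idle cores.  Demand numbers are invented.
HOURS = 24 * 7                 # one week of hourly slots
CORES = 268                    # cluster size from the slides

# Hypothetical local demand: high during weekday working hours, low otherwise.
local_demand = [220 if (h // 24) % 7 < 5 and 9 <= h % 24 < 18 else 30
                for h in range(HOURS)]

local_used = sum(min(d, CORES) for d in local_demand)
grid_filled = sum(CORES - min(d, CORES) for d in local_demand)

print(f"Utilisation, local jobs only:  {local_used / (CORES * HOURS):.0%}")
print(f"Core-hours backfilled by grid Monte Carlo: {grid_filled}")
print(f"Utilisation with backfilling:  {(local_used + grid_filled) / (CORES * HOURS):.0%}")
```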
7
Some performance numbers

Internal to the cluster the data rates are OK:

  Storage system         direction   max rate [MB/s]
  NFS                    write       250
  NFS                    read        370
  DPM Storage Element    write       4*250
  DPM Storage Element    read        4*270

Transfers to Geneva:

  Source/method                        MB/s        GB/day
  dq2-get average                      6.6         560
  dq2-get max                          58          5000
  FTS from CERN (per file server)      10 to 60    840 – 5000
  FTS from NDGF-T1 (per file server)   3 – 5       250 – 420
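The MB/s and GB/day columns are related by a simple unit conversion; a minimal Python check (not part of the original slides) reproduces the rounded GB/day figures from the quoted rates.

```python
# Cross-check of the "Transfers to Geneva" table: GB/day implied by a
# sustained rate in MB/s (the slide values are rounded).
def gb_per_day(mb_per_s: float) -> float:
    return mb_per_s * 86400 / 1000   # 86400 seconds per day, 1000 MB per GB

for label, rate in [("dq2-get average", 6.6),
                    ("dq2-get max", 58),
                    ("FTS from CERN, low", 10),
                    ("FTS from CERN, high", 60),
                    ("FTS from NDGF-T1, low", 3),
                    ("FTS from NDGF-T1, high", 5)]:
    print(f"{label:22s} {rate:5.1f} MB/s -> {gb_per_day(rate):6.0f} GB/day")
```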
8
Test of larger TCP buffers
– transfer from fts001.nsc.liu.se
– network latency 36 ms (CERN at 1.3 ms)
– increasing TCP buffer sizes on Fri Sept 11th: Solaris default 48 kB, then 192 kB, then 1 MB
– data rate per server reached ~25 MB/s – why?
– can we keep the FTS transfer at 25 MB/s/server?
(plot: data rate per server versus time, with the buffer-size changes marked)
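One way to read the slide's "why?" is the standard bandwidth-delay-product argument (my interpretation, not stated on the slide): a single TCP stream cannot exceed window size divided by round-trip time, so the 48 kB Solaris default caps a 36 ms path at roughly 1.3 MB/s, while ~25 MB/s needs a window of about 0.9 MB, which only the 1 MB setting covers. The sketch below assumes one stream per transfer; FTS may use several streams per file, which would relax the per-stream limit.

```python
# Bandwidth-delay-product estimates for the NDGF-T1 path (36 ms RTT).
RTT = 0.036                              # round-trip time in seconds

def max_rate_mb_s(window_bytes: float) -> float:
    """Throughput ceiling of a single TCP stream limited by its window."""
    return window_bytes / RTT / 1e6

for label, window in [("Solaris default 48 kB", 48e3),
                      ("192 kB", 192e3),
                      ("1 MB", 1e6)]:
    print(f"{label:22s} -> about {max_rate_mb_s(window):5.1f} MB/s per stream")

# Window needed to sustain the observed ~25 MB/s per server:
print(f"Window for 25 MB/s: about {25e6 * RTT / 1e6:.1f} MB")   # ~0.9 MB
```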
9
Issues and concerns
recent issues
– one crash of a Solaris file server in the DPM SE
– two latest Solaris file servers with slow disk I/O, deteriorating over time, fixed by reboot
– unreliable data transfers
– frequent security updates of SLC4
– migration to SLC5, Athena reading from DPM
long-term concerns
– level of effort to keep it all up
– support of the Storage Element
10
Summary and outlook
A large ATLAS T3 in Geneva
Special site for Trigger development
In NorduGrid since 2005
DPM Storage Element since July 2009
– FTS from CERN and from the NDGF-T1
– exercising data transfers
Short-term to-do list
– gradual move to SLC5
– write a note, including performance results
Towards a steady-state operation!