ATLAS Tier-3 in Geneva
Szymon Gadomski, Uni GE at CSCS, November 2009


ATLAS Tier-3 in Geneva
Szymon Gadomski, Uni GE at CSCS, November 2009
– the Geneva ATLAS Tier-3 cluster
– what is it used for
– recent issues and long-term concerns

ATLAS computing in Geneva
268 CPU cores
180 TB for data
– 70 TB of that in a Storage Element
special features:
– direct line to CERN at 10 Gb/s
– latest software via CERN AFS
– SE in the Tiers of ATLAS since Summer 2009
– FTS channels from CERN and from the NDGF Tier 1

Networks and systems
[figure: network and system layout of the Geneva cluster]

S. Gadomski, "Status and plans of the T3 in Geneva", Swiss ATLAS Grid Working Group, 7 Jan Setup and use 1.Our local cluster –log in and have an environment to work with ATLAS software, both offline and trigger develop code, compile, interact with ATLAS software repository at CERN –work with nightly releases of ATLAS software, normally not distributed off-site but visible on /afs –disk space, visible as normal linux file systems –use of final analysis tools, in particular ROOT –a easy way to run batch jobs 2.A grid site –tools to transfer data from CERN as well as from and to other Grid sites worldwide –a way for ATLAS colleagues, Swiss and other, to submit jobs to us –ways to submit our jobs to other grid sites ~55 active users, 75 accounts, ~90 including old not only Uni GE; an official Trigger development site

Statistics of batch jobs
– NorduGrid production since 2005
– ATLAS never sleeps
– local jobs taking over in recent months

Added value by resource sharing
– local jobs come in peaks
– the grid always has jobs
– little idle time, a lot of Monte Carlo done

Some performance numbers

Internal to the cluster the data rates are OK:

  Storage system        direction   max rate [MB/s]
  NFS                   write       250
  NFS                   read        370
  DPM Storage Element   write       4 × 250
  DPM Storage Element   read        4 × 270

Transfers to Geneva:

  Source/method                        MB/s      GB/day
  dq2-get average                      …         …
  dq2-get max                          …         …
  FTS from CERN (per file server)      10 to …   … – 5000
  FTS from NDGF-T1 (per file server)   3 – 5     250 – 420
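As a quick sanity check of the transfer table, a sustained rate in MB/s and a daily volume in GB/day are related by the number of seconds in a day. The short calculation below is a sketch added here, not part of the original slides; it reproduces the NDGF-T1 row, where 3–5 MB/s corresponds to roughly 250–420 GB/day.

```python
# Convert a sustained transfer rate in MB/s to the daily volume in GB/day.
SECONDS_PER_DAY = 24 * 60 * 60          # 86400 s

def gb_per_day(mb_per_s):
    return mb_per_s * SECONDS_PER_DAY / 1000.0   # using 1 GB = 1000 MB

# NDGF-T1 row of the table: 3-5 MB/s per file server
print(gb_per_day(3))   # ~259 GB/day
print(gb_per_day(5))   # ~432 GB/day  (the slide quotes 250-420)
```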

Test of larger TCP buffers
– transfer from fts001.nsc.liu.se; network latency 36 ms (vs 1.3 ms to CERN)
– TCP buffer sizes increased on Friday, Sept 11th: Solaris default 48 kB, then 192 kB, then 1 MB
– data rate per server reached ~25 MB/s (consistent with the window/RTT estimate below)
– can we keep the FTS transfer at 25 MB/s per server?
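The observed rates are consistent with the usual single-stream TCP limit, throughput ≈ window size / round-trip time. The calculation below is a back-of-the-envelope check added here (not from the original slides), using the 36 ms latency quoted above.

```python
# Back-of-the-envelope TCP throughput limit: a single stream is bounded by
# (window size) / (round-trip time).  RTT of 36 ms as quoted on the slide.
RTT_S = 0.036

def max_rate_mb_per_s(window_bytes, rtt_s=RTT_S):
    return window_bytes / rtt_s / 1e6

for label, window in [("48 kB (Solaris default)", 48 * 1024),
                      ("192 kB", 192 * 1024),
                      ("1 MB", 1024 * 1024)]:
    print("%-24s -> %5.1f MB/s" % (label, max_rate_mb_per_s(window)))
# 48 kB  ->  ~1.4 MB/s
# 192 kB ->  ~5.5 MB/s
# 1 MB   -> ~29 MB/s, close to the ~25 MB/s per server seen in the test
```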

Issues and concerns
recent issues
– one crash of a Solaris file server in the DPM SE
– the two latest Solaris file servers with slow disk I/O, deteriorating over time, fixed by reboot
– unreliable data transfers
– frequent security updates of SLC4
– migration to SLC5, Athena reading from DPM (see the sketch below)
long-term concerns
– level of effort to keep it all up
– support of the Storage Element
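To make the "Athena reading from DPM" concern concrete: analysis jobs on the cluster open their input directly from the Storage Element rather than copying it locally first, typically through ROOT's remote-access protocols. The sketch below shows the idea with plain PyROOT rather than a full Athena job; the rfio URL, host, path and tree name are hypothetical, and it assumes a ROOT build with RFIO support and a working DPM client configuration.

```python
# Minimal sketch: open a file that lives on the DPM Storage Element directly
# from ROOT, the way an analysis job would read it without a local copy.
import ROOT

# Hypothetical rfio URL of a file stored on the DPM SE.
url = "rfio:///dpm/unige.ch/home/atlas/user/someone/ntuple.root"

f = ROOT.TFile.Open(url)          # ROOT picks the protocol plugin from the URL
if not f or f.IsZombie():
    raise RuntimeError("could not open %s" % url)

tree = f.Get("CollectionTree")    # hypothetical tree name
if tree:
    print("entries:", tree.GetEntries())
f.Close()
```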

Summary and outlook
A large ATLAS T3 in Geneva
Special site for Trigger development
In NorduGrid since 2005
DPM Storage Element since July 2009
– FTS from CERN and from the NDGF-T1
– exercising data transfers
Short-term to-do list
– gradual move to SLC5
– write a note, including performance results
Towards a steady-state operation!