Duke Atlas Tier 3 Site Doug Benjamin (Duke University)


Slide 1: Duke Atlas Tier 3 Site
Doug Benjamin (Duke University), March 4, 2008

Slide 2: Duke Atlas Group Personnel
- 4 tenured faculty
- 1 research professor, 2 research scientists
- 2 postdocs (located at CERN)
- 2 postdocs (who will eventually transition from CDF at the Tevatron to Atlas)
- Several students

Slide 3: Duke Tier 3 site (current HW configuration)
- 2 TB fileserver (CentOS, …-bit OS); data and user home areas exported to the other nodes in the cluster
- 1 Condor head node (for future expansion)
- 3 interactive nodes: 2 dual-core AMD, SL4.4 (32-bit), 4 GB RAM, 500 GB local storage; one interactive node doubles as the Atlas code server
- 2 nodes at CERN: SLC 4.5 (32-bit), 1 dual CPU, 2 GB RAM, … GB local storage each
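For convenience, a small sketch that totals the figures the slide quotes explicitly (RAM and local disk). It is only a tally of the numbers above; quantities the slide leaves blank are marked None, and core counts are omitted because the CPU descriptions can be read in more than one way.

```python
# Quick tally of the hardware quoted on this slide. Only the explicitly quoted
# RAM and local-disk figures are summed; unknown entries are left as None.
inventory = [
    # (node type,        count, RAM per node (GB), local disk per node (GB))
    ("fileserver",        1,    None,              2000),
    ("condor head node",  1,    None,              None),
    ("interactive node",  3,    4,                 500),
    ("CERN node",         2,    2,                 None),  # disk size missing on the slide
]

def total(col):
    """Sum a per-node column over the cluster, skipping unknown (None) entries."""
    return sum(row[1] * row[col] for row in inventory if row[col] is not None)

print(f"known RAM: {total(2)} GB, known local disk: {total(3) / 1000:.1f} TB")
```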

Slide 4: Duke Tier 3 Software / Support Personnel
Software installed:
- Several recent Atlas releases from kits (12.0.6, … and …), and the occasional nightly release from kits as needed for TRT code development
- Condor (not used at the moment), OSG client
Support personnel:
- Duke Physics department system managers maintain the OS and interactive user accounts
- Duke HEP research scientist (me) installs and maintains the Atlas, OSG and Condor software; ~0.25 FTE is available for these activities, which implies the code must be simple to install, debug and operate
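With only ~0.25 FTE for software support, installations need to be easy to verify. A minimal sketch of such a check: the base path, the setup-script layout, and the release list beyond 12.0.6 are hypothetical, not taken from the slides.

```python
import os

# Hypothetical layout: release kits installed under KIT_BASE, each providing a
# setup script. Neither the path nor the script location comes from the slides.
KIT_BASE = "/opt/atlas/kits"
EXPECTED_RELEASES = ["12.0.6"]  # the one release named explicitly on the slide

def release_ok(release):
    """Return True if the release directory and its setup script are present."""
    setup = os.path.join(KIT_BASE, release, "setup.sh")
    return os.path.isfile(setup)

for rel in EXPECTED_RELEASES:
    print(f"ATLAS release {rel}: {'installed' if release_ok(rel) else 'MISSING'}")
```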

Slide 5: Upgrade plans
Near term (within 6 months):
- Additional computing: … core machines, 2 GB/core
- Additional disk: up to 10 TB standalone fileservers
Longer term, minimal survival mode (next year or so, depending on delivered luminosity):
- An 8-core machine per group member (~10 machines)
- ~25 TB disk storage
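The longer-term numbers imply a simple capacity tally; the even per-member disk split below is just the quoted total divided by the number of members, an illustration rather than a plan stated on the slide.

```python
# Survival-mode sizing from the figures above: one 8-core machine per group
# member (~10 members) and ~25 TB of disk in total. The even per-member disk
# split is illustrative only.
members = 10
cores_per_machine = 8
total_disk_tb = 25

total_cores = members * cores_per_machine          # ~80 cores
disk_per_member_tb = total_disk_tb / members       # ~2.5 TB each
print(f"~{total_cores} cores total, ~{disk_per_member_tb:.1f} TB of disk per group member")
```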

Slide 6: Our current analysis model
- Based on a small Tier 3; heavily depends on outside computing
- A fraction of the data is copied to local storage
- Programs are tested on local machines
- Larger-scale jobs are sent to the grid
- The resulting histogram files are returned to local machines
- Disadvantage: very reliant on the availability of outside CPU and storage at the Tier 1 and Tier 2 sites
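A schematic of the workflow this slide describes: test on the locally cached slice, push larger jobs to the grid, and bring back only the small histogram output. The helper functions and the cache-size threshold are stand-in stubs for illustration, not real Duke or grid tooling.

```python
# Schematic of the current analysis model: small test jobs run on the local
# machines; anything bigger than the locally cached data goes to the grid and
# only the (small) histogram files come back. All helpers are placeholder stubs.

LOCAL_CACHE_TB = 2.0   # order of the local fileserver capacity quoted earlier

def run_local(input_tb):
    """Stand-in for running the analysis on the interactive nodes."""
    return f"ran locally over {input_tb} TB"

def run_on_grid(input_tb):
    """Stand-in for a grid submission that uses Tier 1/2 resources."""
    return f"grid job over {input_tb} TB, histogram files copied back"

def run(input_tb):
    # Test programs on local machines; send larger-scale jobs to the grid.
    return run_local(input_tb) if input_tb <= LOCAL_CACHE_TB else run_on_grid(input_tb)

print(run(0.05))   # small local test
print(run(19.0))   # full dataset: relies on outside CPU and storage
```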

Slide 7: Upgrade plan (ideal model)
- ~25 8-core machines
- ~100 TB disk space
Analysis model:
- Copy most of the DPD data locally
- Analyze locally except for the very large jobs
- Would allow us to analyze the data quickly as it becomes available
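A back-of-envelope turnaround estimate for this ideal setup; the sustained per-core analysis rate is an assumed figure used only for illustration and is not quoted on the slide.

```python
# Rough turnaround for the "ideal" Tier 3: 25 machines x 8 cores over ~100 TB of
# locally resident DPDs. The per-core rate is an assumption, not a slide figure.
machines, cores_per_machine = 25, 8
local_data_tb = 100.0
rate_mb_per_s_per_core = 10.0              # assumed sustained analysis rate per core

cores = machines * cores_per_machine                          # 200 cores
tb_per_hour = cores * rate_mb_per_s_per_core * 3600 / 1e6     # MB/s -> TB/hour
print(f"{cores} cores -> ~{tb_per_hour:.1f} TB/hour; "
      f"one full pass over {local_data_tb:.0f} TB in ~{local_data_tb / tb_per_hour:.0f} hours")
```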

Slide 8: Comparison with Duke CDF
- Duke HEP has the equivalent of an Atlas Tier 3 at FNAL: 26 cores on 10 machines (40% are desktops); 14 cores were purchased recently (< 1 year old) on 4 machines
- 15.4 TB RAID 5/6 on 6 file servers, plus local storage on the compute nodes
- We make extensive use of collaboration-wide CDF resources: > 50% of all CDF computing is made available to general users for their own analyses (vital, since an institution of Duke's size cannot afford a large farm; our computing budget is too small)
For an integrated luminosity of ~2.4 fb^-1:
- High-pT electrons, data: 122 M events, 19 TB (158 kB/event)
- Analysis ntuple: 122 M events, 6 TB (52 kB/event)
- MC W(enu) [Z(ee)]: 36 [23] M events, 6 [3.9] TB (163 [167] kB/event)
- Associated ntuples: 28 [20] M events, 2.1 [1.6] TB (82 [52] kB/event)
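The dataset volumes quoted above follow directly from the number of events times the size per event; a quick check of the first few entries using only the slide's own figures:

```python
# Consistency check of the CDF dataset sizes quoted above:
# volume ~= number of events x size per event.
samples = [
    # (sample,           events,  kB/event, TB quoted on the slide)
    ("high-pT e data",   122e6,   158,      19),
    ("analysis ntuple",  122e6,    52,       6),
    ("MC W(enu)",         36e6,   163,       6),
    ("MC Z(ee)",          23e6,   167,       3.9),
]

for name, events, kb_per_event, quoted_tb in samples:
    tb = events * kb_per_event * 1e3 / 1e12      # kB -> bytes -> TB (decimal units)
    print(f"{name:16s} ~{tb:4.1f} TB (slide: {quoted_tb} TB)")
```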

Slide 9: Challenges
Interaction with the Duke computing infrastructure:
- Campus firewalls: getting normally closed ports opened is a slow process
- Campus networking, the "last mile" problem: Duke University is on Internet2 and the National LambdaRail, but the Physics department network is not
Funding for computing:
- In the past 2 years we received no funds from DOE for computing; any computing came from a small grant from the University
- We need to find additional resources if we are to grow to the size we envision we need in order to be effective
- US ATLAS should make very clear to the funding agencies the working model for Tier 3 sites and the user support (CPU, storage) available at Tier 2 sites

Slide 10: Conclusions
- The Duke Tier 3 can serve as an example of a small Tier 3 site designed to provide enough resources for group members to analyze Atlas data effectively
- Without sufficient funding for Tier 3 sites, this model implicitly assumes that sufficient resources will be made available to general users at the Tier 1 and Tier 2 sites