Doug Benjamin, Duke University

ATLAS Analysis Model – analyzer view (Jim Cochran’s slide about the Analysis Model)
- ESD/AOD, D1PD, D2PD: POOL based; D3PD: flat ntuple
- Contents defined by physics group(s): made in official production (T0), remade periodically on T1, or produced outside official production on T2 and/or T3 (by group, sub-group, or university group)
- Data flow: streamed ESD/AOD → thin/skim/slim → D1PD → first-stage analysis → DnPD → ROOT histograms, spread across T0/T1, T2, and T3
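Since a D3PD is a plain ROOT ntuple, it can be inspected directly from PyROOT without any ATLAS-specific software. A minimal sketch follows; the file name, tree name, and branch name are illustrative placeholders, not names fixed by the analysis model.

```python
# Minimal PyROOT sketch for looping over a D3PD flat ntuple.
# The file name, tree name ("physics") and branch name ("jet_n") are
# illustrative placeholders, not names prescribed by the analysis model.
import ROOT

f = ROOT.TFile.Open("my_sample.D3PD.root")       # hypothetical local D3PD file
tree = f.Get("physics")                          # assumed tree name

h_njet = ROOT.TH1F("h_njet", "Jet multiplicity", 20, 0, 20)
for event in tree:                               # PyROOT iterates event by event
    h_njet.Fill(event.jet_n)                     # assumed branch name

h_njet.SaveAs("njet.root")
f.Close()
```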

- Design a system that is flexible and simple to set up (1 person, < 1 week)
- Simple to operate: < 0.25 FTE to maintain
- Scalable with data volumes
- Fast: process 1 TB of data overnight (see the rough rate estimate below)
- Relatively inexpensive
  - Run only the needed services/processes
  - Devote most resources to CPUs and disk
- Using common tools will make it easier for all of us
  - Easier to develop a self-supporting community
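As a sanity check on the "1 TB overnight" goal, the sustained read rate it implies is modest. A sketch of the arithmetic, assuming a 10-hour night (the slide does not specify the window):

```python
# Back-of-the-envelope estimate for the "process 1 TB overnight" goal.
# The 10-hour "night" is an assumption for illustration.
data_bytes = 1e12                     # 1 TB
night_seconds = 10 * 3600             # assumed 10-hour overnight window

rate_mb_s = data_bytes / night_seconds / 1e6   # sustained aggregate rate in MB/s
print(f"required sustained rate: {rate_mb_s:.0f} MB/s (~{rate_mb_s * 8:.0f} Mb/s)")
# -> roughly 28 MB/s, i.e. ~220 Mb/s that the disks and network together must sustain
```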

Interactive node design:
- Users work on an interactive node and submit to the Grid with pathena/prun
- A file server exports the users’ home areas, the ATLAS/common software, and the ATLAS data to the interactive node via NFSv4 mounts
- Design done; instructions on the wiki in the first week of November

US Tier 2 cloud and Tier 3 Storage Elements:
- Two T3g’s are in Tiers of ATLAS (Duke and ANL) as part of throughput testing; other T3g’s have been asked to set up their SEs (all are welcome and encouraged to do so)
- A recent throughput test with the ANL SE (> 500 Mb/s) shows that a $1200 PC (Intel i7, X58 chipset, SL 5.3) can serve as the SE for a small T3
- BeStMan Storage Resource Manager (SRM) runs on the file server; installation instructions are in the wiki (your comments encouraged)
- Data will come from any Tier 2 site; as T3 Storage Elements come online and are tested, they will be added to the ATLAS Distributed Data Management (DDM) system as Tiers of ATLAS
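Once a T3 Storage Element is registered in Tiers of ATLAS, users pull datasets from the Tier 2 sites with the DQ2 client tools. A minimal sketch of driving that from Python is below; the dataset name is a made-up placeholder, and it assumes dq2-get is installed and a valid grid proxy exists.

```python
# Minimal sketch: fetch an ATLAS dataset to local Tier 3 storage with dq2-get.
# The dataset name is a made-up placeholder; dq2-get must be on the PATH and a
# valid grid proxy must already be initialised.
import subprocess

dataset = "some.example.atlas.dataset"   # hypothetical dataset name
subprocess.check_call(["dq2-get", dataset])
```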

Batch system (Condor):
- Users submit jobs from the interactive node; a Condor master distributes them to the worker nodes
- Design done; instructions on the wiki in the first week of November
- AOD analysis at ~10 Hz: > 139 hrs; many cores needed (see the estimate below)
- A common user interface to the batch system simplifies users’ work
  - ANL has developed such an interface: ARCOND
  - Well tested on their system; will need to be adapted for other Tier 3 sites
  - You can help by testing it at your site
  - LSF sites can help: a new batch system
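To see why many cores are needed: at ~10 Hz per core the wall time is set by the event count, so turnaround scales with the number of cores used in parallel. A sketch of the arithmetic; the 5M-event sample is an assumption chosen because it reproduces the ~139 hours quoted above, and the 10-hour target is illustrative.

```python
# Why AOD analysis needs many cores: wall time at ~10 Hz per core.
# The 5M-event sample (chosen to reproduce the ~139 hrs above) and the
# 10-hour overnight target are assumptions for illustration.
events = 5_000_000
rate_hz = 10.0              # per-core AOD analysis rate from the slide
target_hours = 10.0         # assumed overnight turnaround

single_core_hours = events / rate_hz / 3600.0
cores_needed = single_core_hours / target_hours

print(f"one core: {single_core_hours:.0f} hours")                             # ~139 hours
print(f"cores for a {target_hours:.0f}-hour turnaround: {cores_needed:.0f}")  # ~14 cores
```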

Two ways to organize Tier 3 storage:
- Dedicated file servers, with worker nodes that have little local storage; data flows to the workers over the network
- Storage on the worker nodes themselves
- XRootD can be used to manage either type of storage; a draft of the XRootD installation instructions exists
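With either layout, an XRootD redirector gives users a single entry point: ROOT opens a root:// URL and the redirector locates the data server that actually holds the file. A minimal PyROOT sketch, with a hypothetical redirector host and file path:

```python
# Minimal sketch: read a file through an XRootD redirector from PyROOT.
# The redirector host and file path are hypothetical placeholders.
import ROOT

url = "root://xrd-redirector.example.edu//atlas/local/user/some_d3pd.root"
f = ROOT.TFile.Open(url)
if f and not f.IsZombie():
    print("opened", f.GetName())
    f.Close()
```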

Distributed data advantages (data stored on worker nodes):
- Disk in a worker node is cheaper than disk in a dedicated file server

- Tier 3 installation instructions before Jan ’10
  - You can help by reading the instructions and giving us feedback (as you try them out)
- Starting to bring Storage Elements online
  - Need to know the limits of ATLAS DDM before the crunch
  - The more sites the better
- Tier 3s are connected to the Tier 1-2 cloud through the Storage Elements; this is the focus of integration with the US ATLAS computing facilities
- Once ARRA computing funds arrive, we will initially focus effort on sites starting from scratch

- US ATLAS provides ½ person for Tier 3 support
- Open Science Grid (OSG) will help
- Our software providers are helping (Condor team, XRootD team)
  - Rik and Doug just spent 2 days working with the OSG and Condor teams to set up and provide documentation for US ATLAS Tier 3s
- Tier 3s will be community supported
  - US ATLAS HyperNews forum
  - US ATLAS Tier 3 trouble ticket system at BNL

- Continue to investigate technologies that make Tier 3s more efficient and easier to manage
  - The virtual machine effort continues
    - Collaboration between BNL, Duke, LBL, UTA, and CERN (and soon OSU)
    - BeStMan-gateway SRM is running in a VM at BNL as a proof of concept (it will be tuned for performance)
    - XRootD redirector is next
    - Condor is part of CernVM
- ANL will set up integration and test clusters
  - The integration cluster is a small Tier 3: code is tested there before upgrades are recommended for the rest of the Tier 3 cloud

- Tier 3s are our tool for data analysis
  - We need your feedback on the design, configuration, and implementation
  - Many tasks exist where you can help now
- Tier 3s will be community supported
  - We should standardize as much as possible
    - Use common tools (modify existing tools to make them more inclusive)
- We are planning a flurry of Tier 3 activity over the next 5 months
  - Your involvement now will pay off very soon
  - Thanks for attending this meeting; it shows people are getting involved