Tier 3 Storage Survey and Status of Tools for Tier 3 Access, Futures
Doug Benjamin, Duke University

Tier 3 Storage Survey questions

1. What is the Name of your Institution?
2. What sort of Storage technology do you use at your Tier 3? (please select all that apply)
3. Would you like to federate your storage with other Tier 1, Tier 2 and Tier 3 sites within the US as part of federated xrootd?
4. Do you have a gridftp server(s) at your Tier 3?
5. Do you have an SRM at your site?
6. If you have a gridftp server(s) or SRM at your site, is this registered with OSG?
7. If you answered yes to the previous question, what is the name of your OSG Resource Group (it should be your site name)?
8. Is your Site in the Tiers of ATLAS?
9. Is there anything that can be done to make storage easier for you? (Please add comments if you wish)

Tier 3 Institutions

Co-located at Tier 2 sites:
- University of Chicago
- University of Michigan
- UT Arlington
- Indiana University* (located on local campus)
- University of Oklahoma
- Michigan State U.
- University of Illinois at Urbana-Champaign
- SLAC Tier 3

Stand alone Tier 3 sites:
- University of Oregon
- Univ of South Carolina
- SUNY Albany
- Hampton University (Grid site)
- Tufts University (Grid site)
- Louisiana Tech University
- Columbia
- New York University
- Wisconsin (Tier 3gs)
- California State University Fresno
- NIU
- Univ. of California, Santa Cruz
- Univ. of Pennsylvania
- Univ. of Massachusetts, Amherst
- SMU (Grid)
- Stony Brook University
- University of Washington
- UT Dallas (Tier 3gs)
- ANL
- Duke University

28 out of ~42 sites (20 stand alone T3's, 8 co-located)

What sort of Storage technology do you use at your Tier 3?

"Other" responses: XFS iSCSI, Open-E, "We have deprecated our dCache system", AFS, local disks cataloged by arcond.

Would you like to federate your storage with other Tier 1, Tier 2 and Tier 3 sites within the US as part of federated xrootd?

Do you have a gridftp server(s) at your Tier 3? Do you have an SRM at your site?

If you have a gridftp server(s) or SRM at your site, is this registered with OSG?

OSG Resource Groups reported: Hampton_ATLAS_CE, Tufts_ATLAS_Tier3, LONI_OSG1, NYU ATLAS Tier3, WISC-ATLAS, AGLT2, Penn ATLAS Tier 3, UMass_Tier3, SMU_HPC, OUHEP_OSG, MSU-OSG, IllinoisHEP, utd-hep, WT2_SE, DukeT3

Is your Site in the Tiers of ATLAS?

Is there anything that can be done to make storage easier for you?

- Make sure the xrootd installation docs are all up to date. Also, I know there was some effort the tier3 group was making to get puppet installs running. That'd be hugely helpful (though perhaps xrootd couldn't be configured using that anyway).
- Any suggestions about performance monitoring tools/suites? We will likely be installing a gridftp server in front of the xroot system.
- At the moment we are reasonably happy with our storage. We are using CVMFS for both Tier-2 and Tier-3. Having users get the needed tools/environment via CVMFS would be nice, especially if the 'Federated Xrootd' software/config were in place to allow "transparent" access to data for Tier-3 users.
- More tutorials on xrootd.

Is there anything that can be done to make storage easier for you?

- 1) Improve the choice of where to get the data (via dq2-get), given my location. At least choose US vs. overseas sites, but better yet, learn (or let me choose) a default set of preferences. 2) Improve the error recovery in dq2-get for sites that claim to have complete datasets but consistently or occasionally fail to deliver one or more files. At least make a "summary file" which is empty if nothing failed and lists exactly which files had errors; but better yet, try another site.
- Documentation and tutorials. The terminology and general lexicon of Tier3 has become more like a Tier2's, and understanding even this questionnaire requires a great deal of IT knowledge, well beyond the average capabilities of physicists (especially students or postdocs) who can spend only a small fraction of their time on the Tier3 while doing physics. I do not remember this being intended in the original Tier3 report from 2008, which had several classifications of Tier3. I would really like to see more choices, especially the option to simplify a Tier3 to a barebones Linux environment with the standard built-in Linux tools (if possible), without going into the business of gridFTP/SRM/xrootd - not every Tier3 can crack all such Tier2-like services!

How do users get data to their Tier 3?

- Client tools – dq2-get: last summer, Eric Lançon showed it is very popular – plan to repeat the measurement (see the sketch after this slide)
- Dataset subscription through a Tiers of ATLAS site:
  o 7 sites – ANL (test site for now), Duke, NERSC, Penn, SMU, UTD, Wisc
  o Users still use dq2-get at some of these sites
  o Requires SRM and gridftp servers and an LFC
- Users have requested to still get subscribed data with much less overhead
  o Besides, Hiro doesn't want us to muck up the Tier 3 LFC too much
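To make the dq2-get path concrete, here is a minimal Python sketch of how a Tier 3 user script might wrap the client tool. Everything in it is an assumption for illustration: the dataset name and target directory are placeholders, and it presumes the DQ2 client tools and a valid grid proxy are already set up.

```python
# Minimal sketch of pulling a dataset to Tier 3 storage with dq2-get.
# Assumes the DQ2 client tools are on PATH and a valid grid proxy exists;
# the dataset name and target directory below are placeholders.
import subprocess
import sys


def fetch_dataset(dataset, target_dir):
    """Run dq2-get in target_dir and report whether it exited cleanly."""
    result = subprocess.run(
        ["dq2-get", dataset],  # basic invocation; site-selection options vary by dq2 version
        cwd=target_dir,
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("dq2-get failed:", result.stderr, file=sys.stderr)
    return result.returncode == 0


if __name__ == "__main__":
    ok = fetch_dataset("user.example:mc_sample.NTUP.v1/",  # placeholder dataset name
                       "/data/atlas/local")                # placeholder Tier 3 path
    sys.exit(0 if ok else 1)
```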

Upcoming changes to ATLAS DDM based on Tier 3 feedback

- Last software week, we held a well attended ad-hoc meeting in Restaurant 1 to discuss the local analysis Tier 3 DDM needs
- Angelos Molfetas provided a summary of T3 data management issues (link to Angelos' talk)
- Site services changes (a sketch of the gridFTP-only idea follows this slide):
  o Use FTS with gridFTP
  o SRM not required, no LFC registration
  o Can be throttled
  o gridFTP-ls instead of LFC look-up during transfer
  o Replica removed from DDM catalog once the transfer is complete
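To illustrate the SRM-less, registration-less idea (this is not the DDM site-services code), the sketch below builds plain gsiftp:// source/destination pairs with the destination path derived from the dataset and file names, so no LFC registration or lookup is involved. The endpoints, dataset, and file names are placeholders.

```python
# Illustrative sketch (not the DDM site-services implementation) of a
# gridFTP-only transfer request: plain gsiftp:// source and destination
# URLs, with the destination path derived deterministically from the
# dataset and file names rather than looked up in an LFC.

DATASET = "mc_sample.NTUP.v1"                        # placeholder dataset name
FILES = ["NTUP._000001.root", "NTUP._000002.root"]   # placeholder file names

SOURCE_SE = "gsiftp://se01.some-tier2.example.org:2811/atlas/datafiles"   # placeholder
DEST_SE = "gsiftp://t3-gridftp.example.edu:2811/atlas/localgroupdisk"     # placeholder


def transfer_pairs(dataset, files):
    """Yield (source, destination) URL pairs for one dataset."""
    for name in files:
        src = f"{SOURCE_SE}/{dataset}/{name}"
        dst = f"{DEST_SE}/{dataset}/{name}"  # deterministic path: no catalog lookup needed
        yield src, dst


if __name__ == "__main__":
    # This pair list is what would be handed to a bulk transfer service such as FTS.
    for src, dst in transfer_pairs(DATASET, FILES):
        print(src, dst)
```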

Current Status

- The DDM team has developed the code changes for SRM-less and "registration-less" site services requiring only a gridftp server
- Initial testing has begun
- Expect to continue testing at sites within the US this Fall
- Should we expect most US Tier 3 sites to set up gridftp servers? (A minimal reachability check is sketched below.)
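As a small aside on the gridftp question, the sketch below checks whether a site's gridftp endpoint is reachable by probing the standard GridFTP control port (2811). It tests only network reachability, not authentication or actual transfers, and the hostname is a placeholder.

```python
# Quick sketch of checking whether a site's gridftp server is reachable by
# probing the standard GridFTP control port (2811). This only tests network
# reachability, not authentication or transfers; the hostname is a placeholder.
import socket


def gridftp_port_open(host, port=2811, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    host = "t3-gridftp.example.edu"  # placeholder Tier 3 gridftp host
    status = "reachable" if gridftp_port_open(host) else "NOT reachable"
    print(f"GridFTP control port on {host}: {status}")
```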

Conclusions

- 2/3 of all possible Tier 3 sites responded
- NFS is used at almost all sites
- XRootD is used at > 70% of responding sites
- The XRootD federation needs better advertising
- Users want easy/efficient mechanisms to get their data
- Analysis-only Tier 3 sites wanted different Site Services; DDM listened and new changes are coming
- Are we ready for all of these new DDM sites?
  o What is the US ATLAS policy?