ATLAS XRootd Demonstrator
Doug Benjamin, Duke University
On behalf of ATLAS

Preamble
This talk represents the work of many people within ATLAS; I would like to recognize and thank them.
– Idea – Massimo Lamanna
– Coordination / Tier 3 testing – DB
– Tier 1/Tier 2 interface (coding/testing) – Charles Waldman, Rob Gardner, Brian Bockelman (CMS)
– XRootd expert guidance/help – Andy Hanushevsky, Wei Yang, Dirk Duellmann, Andreas Joachim Peters, Massimo Lamanna
– ATLAS DDM – Angelos Molfetas, Simone Campana, Hiro Ito
– US ATLAS Tier 1/2 – Patrick McGuigan, Shawn McKee, Saul Youssef, John Brunelle, Hiro Ito, Wei Yang
– ATLAS Tier 3 sites (participating in the immediate future) – Justin Ross (US), Santiago Gonzalez de la Hoz (Spain), Wahid Bhimji (UK), Andreas Penzold (Germany)

Description
XRootd makes an excellent data discovery and data transfer system when used with a global name space (which ATLAS DDM calls the Global Physical Dataset Path). It augments, and does not replace, the existing ATLAS DDM system. Tier 1 and Tier 2 sites would act as read-only seeds for the data sources. Tier 3 sites both fetch data and serve it (sink and source), reducing the load between Tier 1/Tier 2 and Tier 3 sites.

Advantages
Tier 1 and Tier 2 sites can limit the bandwidth used by XRootd (overloading the system is not an issue)
Tier 3 sites would participate in delivering data to other Tier 3 sites
Popular data sets would be replicated as required, based on usage (similar to ATLAS PD2P)
Tier 3 sites co-located with a Tier 2 (or Tier 1) site could use XRootd data servers attached to the Tier 2 storage for read-only access
Complex Tier 2 sites (those with separate physical locations) can serve data to jobs through the XRootd protocol
CERN-based physicists can easily fetch their own ROOT files from their home-institute clusters to their local desktops (see the example following this slide)
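For illustration, a federated fetch of a user's ROOT file from a desktop might look like the following. The redirector host name and file path are placeholders (not endpoints from the talk); the redirector locates the site that actually holds the file and the copy proceeds from there.

    # Copy one file out of the federation by its global-namespace path.
    # The host name and path below are placeholders, not real endpoints.
    xrdcp root://global-redirector.example.org//atlas/dq2/user/myanalysis/ntuple_001.root \
          /tmp/ntuple_001.root

    # The same URL also works for a direct remote read from ROOT, e.g.
    #   root -l 'TFile::Open("root://global-redirector.example.org//atlas/dq2/user/myanalysis/ntuple_001.root")'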

Required Steps
Global name space convention and implementation (including client tools)
Couple XRootd to the LFC and local storage space at ATLAS DDM sites (Tier 1/Tier 2 and some Tier 3 sites)
Local site caching of files at Tier 3s
Implement the ATLAS data security policies

Global Physical Dataset Path (PDP) (from Angelos Molfetas' slides, last ATLAS software week)
Background:
– Requested for Tier-3 sites
– Store datasets in a commonly accessible area
Support in the dq2 clients:
– Download to the PDP using dq2-get
– List datasets and files in the PDP using dq2-ls
– Base path and prefix can be specified
Implications:
– Less reliance on Site Services for Tier-3
– Less reliance on the LFC for Tier-3
PDP example (see the path-construction sketch following this slide):
– DSN: mc10_7TeV Pythia_Gee_003_800.merge.AOD.e574_s933_s946_r1652_r1700_tid192840_00
– PDP: /grid/atlas/dq2/mc10_7TeV/AOD/e574_s933_s946_r1652_r1700/mc10_7TeV Pythia_Gee_003_800.merge.AOD.e574_s933_s946_r1652_r1700_tid192840_00/
– With base path and prefix: {bpath}/{prefix}/mc10_7TeV/AOD/e574_s933_s946_r1652_r1700/mc10_7TeV Pythia_Gee_003_800.merge.AOD.e574_s933_s946_r1652_r1700_tid192840_00
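A minimal sketch of how a PDP path can be assembled from the pieces of a dataset name, using the example above. This only illustrates the naming convention, not the actual dq2 client code; the base path, prefix, and the dataset ID (shown as XXXXXX, which is not given on the slide) are placeholders.

    #!/bin/bash
    # Illustration only: build a PDP-style path from the parts of a dataset name.
    # bpath, prefix and the dataset ID XXXXXX are placeholders, not values from the talk.
    bpath="/grid"
    prefix="atlas/dq2"
    dsn="mc10_7TeV.XXXXXX.Pythia_Gee_003_800.merge.AOD.e574_s933_s946_r1652_r1700_tid192840_00"

    # DSN fields: project.<datasetID>.<physics short name>.<prod step>.<data type>.<version tags>
    IFS='.' read -r project dsid physics step dtype version <<< "$dsn"

    # The tags directory is the version string without its trailing _tid suffix
    tags="${version%_tid*}"

    pdp="${bpath}/${prefix}/${project}/${dtype}/${tags}/${dsn}/"
    echo "$pdp"
    # -> /grid/atlas/dq2/mc10_7TeV/AOD/e574_s933_s946_r1652_r1700/<full DSN>/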

LFC to global name space (PDP) coupling (from Charles Waldman's slides, last ATLAS software week)
Use the XRootd plug-in architecture to read from non-xroot storage ("fslib") and to map the global namespace to local storage paths ("Name2Name") [initial idea – Brian Bockelman]; a mapping sketch follows this slide
Supports dCache, xrootd, GPFS, Lustre, etc. as storage backends
Tier 3 sites can store files according to the global namespace (no LFC)
Code (dev)
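Conceptually, the Name2Name plug-in translates a global-namespace path into whatever the local storage actually uses. The real plug-in is compiled code loaded by the xrootd server; the shell sketch below only illustrates the kind of mapping involved, with invented site mount points and an assumed choice of backend.

    #!/bin/bash
    # Illustration only: the kind of translation a Name2Name plug-in performs.
    # The mount points are invented; a real plug-in runs inside the xrootd server
    # and may consult the LFC at Tier 1/Tier 2 sites.
    global_path="/atlas/dq2/mc10_7TeV/AOD/e574_s933_s946_r1652_r1700/somefile.root"

    case "${SITE_BACKEND:-tier3}" in
      dcache) local_path="/pnfs/example.edu/atlas${global_path#/atlas}" ;;  # dCache pnfs namespace
      lustre) local_path="/lustre/atlas${global_path#/atlas}" ;;            # POSIX file system
      *)      local_path="$global_path" ;;  # Tier 3 storing files by global name: identity mapping
    esac

    echo "$local_path"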

(Diagram from Charles Waldman's slides, last ATLAS software week)

Data caching in and out of Tier 3 sites
XRootd transfer service (FRM) plugin: the transfer process calls a simple shell script to fetch the data (a sketch of such a script follows this slide)
Currently testing with the XRootd copy command, xrdcp, in a simple bash script; another transfer command could be used if needed
Some Tier 3 (and Tier 2) sites have data servers on private networks; files are transferred to and from such a site using the XRootd proxy service
Proof-of-principle testing complete: able to copy files from Tier 2 sites to Tier 3 sites
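A minimal sketch of the kind of fetch script the FRM could call, as described above. The argument convention and the redirector host name are assumptions here, not taken from the talk; only the use of xrdcp in a simple bash script is.

    #!/bin/bash
    # Sketch of an FRM "transfer in" script: stage a missing file into local storage
    # with xrdcp. Argument convention and redirector host are assumptions.
    #   $1 : local physical file name the FRM wants populated
    #   $2 : logical (global-namespace) file name
    PFN="$1"
    LFN="$2"

    # Fetch the file from the federation via the global redirector (placeholder host)
    xrdcp -f "root://global-redirector.example.org/${LFN}" "${PFN}"
    exit $?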

Current Status
ATLAS DDM client tools (dq2-get, dq2-ls) modified to use the global physical dataset path (PDP)
Services deployed at four Tier 2s in the US, two Tier 3s in the US, and one test site in the US
– MWT2, AGLT2 (dCache); SLAC, SWT2 (XRootd)
– Will deploy at a fifth Tier 2, NET2 (GPFS), shortly
Functional testing successful:
– Xrd-native, Xrd-dcap, Xrd-direct pool, XRootd extreme copy (example following this slide)
Testing Ganglia-based monitoring of XRootd data flows out of Tier 2/3 sites
Tier 3 XRootd transfer service (FRM) testing:
– Transferred files automatically between Tier 3 sites
– Native XRootd copy (xrdcp)
– Triggered from a user ROOT analysis job
– Can use sites with data clusters on private networks – needs an edge service (proxy machine)
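For reference, the "extreme copy" mode mentioned above lets xrdcp ask the redirector for several federated sources and read the file from them in parallel (the backup slide uses the same flag). The host name and path below are placeholders.

    # Extreme copy: read one file from multiple federated sources in parallel.
    # Host name and path are placeholders, not real endpoints.
    xrdcp -x root://global-redirector.example.org//atlas/dq2/mc10_7TeV/AOD/e574_s933_s946_r1652_r1700/somefile.root \
          /data/cache/somefile.root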

Plans
Near term (< 6 weeks):
– Add a few additional Tier 2 sites in the US and/or Europe
– Add a few additional Tier 3 sites in the US and Europe
– Test Tier 3 site caching (for Lustre and GPFS back ends)
– Add automated testing (HammerCloud)
– Would like to add an additional global redirector in Europe (redundancy; reduce RTT for European sites)
Longer term:
– Increase the number of Tier 3 sites testing the system on both sides of the Atlantic (10-15)
– Authentication – use X509 to allow authorized users to access ATLAS data
– Write activity triggered by trusted servers

Conclusions
XRootd is a good match for data delivery among Tier 3 sites while reducing the load on the rest of the ATLAS DDM system. ATLAS has made much progress in developing the components needed to use such a system. It should help us keep on top of users' data needs.

Backup Slides

Stand-alone Tier 3 – caching files with the FRM (backup diagram)
Example command: xrdcp -x root://LR//atlas/foo /tmp
The xrootd system does all of these steps automatically, without application (user) intervention.
Message flow in the diagram (client, local redirector, local data servers A and B with an FRM transfer agent, global redirector, remote redirector, remote data servers C and D):
#1 Client application: open("/atlas/foo")
#2 Local redirector asks its data servers: who has /atlas/foo?
#3 No local data server has the data; server B says it can get /atlas/foo
#4 Client is told to try B
#5 Client opens /atlas/foo on B
#6 B's FRM transfer agent is asked to get /atlas/foo
#7 The transfer agent opens "/atlas/foo" through the global redirector ("skip my redirector")
#8 Global redirector asks the remote redirector: who has /atlas/foo?
#9 Remote redirector asks its data servers: who has /atlas/foo?
#10 Remote data server C answers: I do!
#11 Transfer agent is told to try server C
#12 Transfer agent opens /atlas/foo on C
#13 C sends /atlas/foo
#14 B got /atlas/foo and can now serve it locally

Direct Pool Access