New Features of Xrootd SE
Wei Yang
US ATLAS Tier 2/Tier 3 meeting, University of Texas, Arlington, 2009-11-12

New features of Xrootd:
- Support for space tokens up to 63 characters
  - Previously 15 characters
  - In production at SLAC
- Better support for the composite name space (CNS)
  - Easy recovery of missing entries in the CNS
  - Space usage auditing
  - File inventory records: location (host) and other attributes of real data files
  - The inventory feature conflicts with the current Xrootd SE; there are several choices for addressing this conflict
- Monitoring
  - Monitoring data is XML based
  - Sent to and collected by a third host, which feeds Ganglia, Nagios, etc.
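The XML summary stream mentioned above is enabled in the server configuration. A minimal sketch, assuming a collector called monhost.example.org; the exact directive syntax varies between Xrootd releases, so verify it against the reference manual for your installed version:

```
# Send XML summary reports every 60 seconds to a collector host
# (hypothetical host name; check your Xrootd version's syntax):
xrd.report monhost.example.org:9931 every 60s all
```

The collector side (feeding Ganglia, Nagios, etc.) is a separate process that parses this XML stream.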

New features of Xrootd, cont'd:
- The client library no longer repeats identical large reads
  - Existing clients in ATLAS releases can overload Xrootd data servers this way when a large read-ahead is turned on
- Multiple read-ahead/streaming algorithms
  - When will they be integrated into ROOT and the ATLAS releases?
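Until the new algorithms land in ROOT, the old TXNetFile read-ahead is tuned through .rootrc. A sketch, assuming the XNet parameter names from ROOT versions of that era; the values shown are illustrative, not recommendations:

```
# $HOME/.rootrc -- read-ahead tuning for the TXNetFile (XNet) client.
# Parameter names as recalled from contemporary ROOT releases; verify
# against your ROOT version before relying on them.
XNet.ReadAheadSize:   512000
XNet.ReadCacheSize:   10000000
```

Setting the read-ahead too large is exactly what triggers the repeated-large-reads server overload described above.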

New features of XrootdFS:
- Improved performance with small I/O blocks
  - Read-ahead managed by the Linux kernel (configuration changes)
  - Implements a write cache to capture small sequential writes
- Directory browsing and usage queries without the CNS
  - The CNS has been stable at SLAC
  - XrootdFS without the CNS can't tell whether a data server is down
  - Tier 3 users seem to be confused by the CNS; it is easy to make mistakes
- The new XrootdFS is in light usage at SLAC; should we push it to OSG?
  - Without the CNS, there is no conflict with the inventory feature
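For reference, an XrootdFS mount can be made persistent through /etc/fstab. This is a sketch under stated assumptions: the mount point and redirector are hypothetical, and the option names vary across XrootdFS versions, so consult the man page shipped with your build:

```
# /etc/fstab entry (hypothetical mount point and redirector URL;
# option names differ between XrootdFS versions -- check local docs):
xrootdfs  /mnt/atlas  fuse  rdr=root://redirector.example.org:1094//atlas,allow_other  0  0
```

Once mounted, the cluster namespace appears under /mnt/atlas and ordinary Unix tools work against it, which is the "usage is like NFS" behavior described on the components slide.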

New features of the BeStMan gateway:
- Adler32 support
  - Built-in Adler32 calculation
  - Allows a site-specific external tool to calculate Adler32
- Directory browsing can be turned on and off
- Support for non-GridFTP protocols
  - ATLAS is interested in accessing conditions files via site-specific protocols

Xrootd SE components:
- BeStMan gateway
  - For T2/T3g
- XrootdFS
  - For users and a minimal T3g; usage is like NFS
  - Based on the Xrootd POSIX library and FUSE
  - BeStMan, dq2 clients, and Unix tools need it
- GridFTP for Xrootd
  - In use at WT2 for a while
  - Globus GridFTP + a Data Storage Interface (DSI) module for Xrootd/POSIX
- Xrootd core
  - All BaBar needed was this layer
  - Redirector, data servers, xrdcp
  - More functionality

User interfaces to Xrootd:
- TXNetFile class (C++ and ROOT CINT)
  - Fault tolerance
  - High performance through intelligent logic in TXNetFile and the server
- Command line tools
  - xrdcp: simple, native, lightweight, high performance
- Xrootd POSIX preload library
  - export LD_PRELOAD=/…/libXrdPosixPreload.so
  - ls/cat/cp/file root://redirector:port//path/file
- XrootdFS
  - Mounts the Xrootd cluster into the client host's local file system tree

Accessing Xrootd data from ATLAS jobs:
- Copy input data from Xrootd to local disk on the WN
  - A wrapper script using xrdcp, or cp + the Xrootd POSIX preload library
  - Panda production jobs at SLACXRD work this way
- Read ROOT files directly from Xrootd storage
  - Identify ROOT files using the Unix 'file' command (with the POSIX preload library)
  - Copy non-ROOT files to local disk on the WN
  - Put the ROOT file's xroot URL (root://…) in PoolFileCatalog.xml
  - Athena uses the TXNetFile class to read the ROOT file
  - ANALY_SLAC and ANALY_SWT2_CPB use this mixed access mode
  - Both need a set of tools for copying, deletion, file IDs, and checksums
- Mount XrootdFS on all batch nodes
  - All files appear under the local file system tree; none of the above is needed
  - Untested: XrootdFS came out after the SLAC sites were established
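The decision at the heart of the mixed access mode can be sketched as a tiny wrapper. This is a hypothetical helper, not the production script: it checks the 4-byte "root" magic at the start of a file (the same signature the Unix 'file' command keys on) to choose between direct root:// reads and copying to scratch; the file names are illustrative.

```shell
#!/bin/sh
# Hypothetical wrapper logic for the mixed access mode: read ROOT files
# directly from Xrootd storage, copy everything else to the WN's disk.
is_root_file() {
    # ROOT files begin with the 4-byte magic "root".
    [ "$(head -c 4 "$1" 2>/dev/null)" = "root" ]
}

# Illustration with a fake file carrying the ROOT magic bytes:
printf 'root' > /tmp/sample.root
if is_root_file /tmp/sample.root; then
    echo "direct read"      # would place a root:// URL in PoolFileCatalog.xml
else
    echo "copy to scratch"  # would run xrdcp to local disk on the WN
fi
```

The real wrapper would also handle the copy, delete, file-ID, and checksum duties listed above.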

Potential usage of Xrootd SE technology:
High-performance xrdcp over the WAN and BitTorrent-style data transfer allow:
- Super Xrootd clusters (federation) across geographic sites
- Treating Xrootd SEs at other sites as a virtual MSS
- Using the Xrootd MSS interface to fetch data from non-Xrootd SEs?
See Andy Hanushevsky's talk at the ANL Tier 3 meeting.

Virtual Mass Storage System:
[Diagram: xrootd/cmsd clusters at SLAC, UTA, and XYZ, each configured with "all.role manager" and "all.manager meta atlas.bnl.gov:1312", subscribe to a meta-manager at BNL configured with "all.role meta manager"]
- root://atlas.bnl.gov/ includes the SLAC, UTA, and XYZ xroot clusters
- Meta-managers can be geographically replicated!
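The meta-manager setup on this slide corresponds to two small configuration fragments. The directives and host names are taken from the slide itself; the file layout and comments are assumptions:

```
# Meta-manager (at BNL), serving root://atlas.bnl.gov/:
all.role meta manager
all.manager meta atlas.bnl.gov:1312

# Each site manager (SLAC, UTA, XYZ) subscribes to the meta-manager:
all.role manager
all.manager meta atlas.bnl.gov:1312
```

A client contacting atlas.bnl.gov is redirected to whichever site cluster reports the requested file, which is what makes the federated clusters usable as a virtual MSS.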