dCache/XRootD
Dmitry Litvintsev (DMS/DMD), FIFE workshop

What is dCache
Distributed, multi-petabyte, scalable disk storage system with a single-rooted filesystem providing location-independent file access
– Can be interfaced with tertiary storage, serving as a cache front-end to optimize I/O
dCache.org is a collaboration between DESY, Fermilab and NDGF
dCache is open source (github.com/dCache/dcache)

dCache worldwide
More than 70 installations, with a total storage volume of ~120 PB

Fermilab
Long and successful history:
– Run II: CDF (in production since 2003)
– General-purpose Public dCache (in production since 2004)
– CMS (since 2005)
– D0 (in production since 2014)
Fall 2013: public dCache expanded by a factor of 20 to meet the needs of IF experiments (part of the FIFE program)

Read transfer volume scaling (x100 scale-up)

On-line space usage in dCache
On-line space usage in public dCache by storage group (top 10), in GiB
Total used space is currently about 2 PiB

Public dCache (FNDCA)
[Diagram of the FNDCA setup: I/O, management, and data-server nodes]

Access Protocols

Protocol   URL                                                Authentication   Client(s)
dcap       dcap://fndca1.fnal.gov:{24125,24136,24137,24138}   None             dccp, dcap, root
dcap       dcap://fndca1.fnal.gov:{24525,24536}               GSI              dccp, dcap, root
dcap       dcap://fndca1.fnal.gov:{24725,24736}               Kerberos (GSS)   dccp, dcap, root
FTP        fndca1.fnal.gov:24126                              passwd           ftp
FTP        fndca1.fnal.gov:24127                              Kerberos (GSS)   ftp
gFTP       fndca1.fnal.gov:{2811,2812}                        GSI              globus-url-copy, srmcp, uberftp
SRM        srm://fndca1.fnal.gov:8443                         GSI              lcg-cp, srmcp, …
NFS v4.1   stkensrv1n.fnal.gov:2049                           None             POSIX I/O
NFS v3     stkensrv1n.fnal.gov:2049                           None             POSIX metadata ops; POSIX I/O with dcap preload library
WebDAV     https://fndca1.fnal.gov:2880                       GSI              browsers, curl, cadaver, davfs, root
XRootD     (x)root://fndca1.fnal.gov:1094                     GSI              xrdcp, xrd, POSIX preload library, FUSE, root
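As an illustration, transfers through two of the doors listed above might look like the minimal sketch below; the local file names, destination paths, and the VO name passed to voms-proxy-init are placeholders rather than values taken from the slides.

    # GridFTP door (GSI authentication): copy a local file into dCache
    voms-proxy-init -voms fermilab          # VO name is an assumption
    globus-url-copy file:///tmp/myfile.root \
        gsiftp://fndca1.fnal.gov:2811/pnfs/fnal.gov/usr/path/to/myfile.root

    # XRootD door (GSI authentication): read a file back out of dCache
    xrdcp root://fndca1.fnal.gov:1094//pnfs/fnal.gov/usr/path/to/myfile.root /tmp/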

Access from root
Provided the proper plugin, access to files in dCache from root looks like:
– dcap (no auth) on the mounted namespace (NFS v3), using the preload library:
    mount -o nfsvers=3 stkensrv1n:/minos /pnfs/minos
    export LD_PRELOAD=/usr/lib64/libpdcap.so.1
    root [0] f=TFile::Open("/pnfs/minos/path/to/file");
– dcap (no auth):
    root [0] f=TFile::Open("dcap://fndca1.fnal.gov:24125/full/path/to/file");
– dcap (GSI auth):
    {grid,voms}-proxy-init ...
    export DCACHE_IO_TUNNEL=/usr/lib64/dcap/libgsiTunnel.so
    root [0] f=TFile::Open("dcap://fndca1.fnal.gov:24525/full/path/to/file");
– dcap (Kerberos auth):
    export DCACHE_IO_TUNNEL=/usr/lib64/dcap/libgssTunnel.so
    root [0] f=TFile::Open("dcap://fndca1.fnal.gov:24725/full/path/to/file");
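Put together, a GSI-authenticated dcap session could be run non-interactively roughly as sketched below; the VO name, the file path, and the use of root's batch options (-l -b -q -e) are assumptions for illustration, not commands shown on the slide.

    # obtain a grid/VOMS proxy
    voms-proxy-init -voms fermilab                              # VO name is an assumption

    # point the dcap library at the GSI tunnel plugin
    export DCACHE_IO_TUNNEL=/usr/lib64/dcap/libgsiTunnel.so

    # open the file through the GSI dcap door in ROOT batch mode
    root -l -b -q -e 'TFile *f = TFile::Open("dcap://fndca1.fnal.gov:24525/full/path/to/file"); if (f) f->ls();'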

Access from root
– NFS v4.1:
    mount -t nfs4 -o minorversion=1 stkensrv1n:/minos /pnfs/minos
    root [0] f=TFile::Open("/pnfs/minos/path/to/file");
– WebDAV (TDavixFile plugin):
    {grid,voms}-proxy-init ...
    root [0] f=TFile::Open("https://fndca1.fnal.gov:2880/pnfs/fnal.gov/usr/path/to/file");

XRootD
Extended rootd (eXtended ROOT daemon)
File access and data transfer protocol
– POSIX-like random access to arbitrary data organized in files of any type
The reference implementation:
– xrootd daemon provides access to distributed data servers
– cmsd daemon clusters xrootd daemons
Provides fault-tolerant, low-latency, high-bandwidth access to data
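For context, clustering xrootd data servers behind a cmsd redirector usually takes only a few configuration directives. The sketch below is an assumption-level illustration (host name, export path, and the exact directive set are placeholders and should be checked against the XRootD documentation), not configuration taken from the slides.

    # hypothetical data-server side of an xrootd/cmsd cluster configuration
    cat >> /etc/xrootd/xrootd-clustered.cfg <<'EOF'
    all.role server                            # this node serves data
    all.manager redirector.example.org 3121    # cmsd on the redirector clusters the servers
    all.export /data                           # namespace exported by this server
    xrd.port 1094                              # standard xrootd port
    EOF
    # the redirector node would instead carry "all.role manager" with the same export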

dCache XRootD
dCache provides a fully functional XRootD server:
– Native Java implementation of the xrootd protocol
– Acts as any other dCache door; the dCache door is the xrootd redirector
– Movers and pools are the xrootd data servers

XRootD door
On public dCache we run an XRootD door: root://fndca1.fnal.gov:1094
How to use:
– Create a {grid,voms} proxy:
    {grid,voms}-proxy-init ...
– With root files:
    root [0] f=TFile::Open("root://fndca1.fnal.gov/pnfs/fnal.gov/usr/fermigrid/volatile/fermilab/litvinse/xib.root");
– With any files:
    xrdcp xroot://fndca1.fnal.gov/pnfs/fnal.gov/usr/fermigrid/volatile/fermilab/litvinse/gums1.txt .
    [xrootd] Total 0.00 MB |====================| % [0.0 MB/s]
– Use xrd, the xrootd file and directory metadata utility

XRootD usage from root (screenshot)

xrd client
Invoke it like so:
    xrd fndca1.fnal.gov
Type 'help' to get the list of commands.
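An interactive xrd session might look roughly like the sketch below; the directory and file paths are placeholders, and the exact command names depend on the xrd client version, so treat them as assumptions and consult 'help' at the prompt.

    xrd fndca1.fnal.gov
    # then, at the xrd prompt:
    #   dirlist /pnfs/fnal.gov/usr/fermigrid/volatile/fermilab    list a directory
    #   stat /pnfs/fnal.gov/usr/path/to/file                      show file size and flags
    #   exit                                                      leave the client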

dCache best practices
For temporary files, use the scratch area /pnfs/<experiment>/scratch/users/<username>
For permanent files:
– Use large files (a few GB); concatenate small files before putting them into dCache if possible
– Use SFA (Small File Aggregation) for small files
Pre-stage data before running on the grid (see the sketch after this list)
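One way to pre-stage a file is through the dcap door with dccp; in the sketch below the -P (pre-stage) option and the file path are assumptions for illustration rather than commands taken from the slides.

    # request that the file be staged from tape to disk without copying it locally
    # (-P is assumed to be dccp's pre-stage option; verify with dccp's built-in help)
    dccp -P dcap://fndca1.fnal.gov:24125/pnfs/fnal.gov/usr/path/to/file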

Storage Federation
A collection of heterogeneous storage resources, managed by co-operating but independent administrative domains, transparently accessible via a common namespace.

XRootD Federated Storage (AAA take)
An XRootD-based framework built on top of existing storage

dCache + XRootD Federation
[Diagram: the client asks the global redirector (xrootd.federation.host) for /path/to/lfn; a TFC rule maps /path/to/lfn to /mapped/path/to/pnfs; the client is redirected to fndca1.fnal.gov, which in turn redirects it to a pool holding the file]
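From the client side the redirection chain is invisible; a federated read could look like the sketch below, where the federation host name and the logical file name are placeholders taken from the diagram rather than real endpoints.

    # ask the global redirector for the logical file name; the federation
    # transparently redirects to fndca1.fnal.gov and then to a pool
    xrdcp -d 1 root://xrootd.federation.host//path/to/lfn /tmp/local_copy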

HTTP Plugin for XRootD (Fabrizio Furano, CERN IT-SDC/ID)
The XrdHTTP plugin provides support for HTTP(S) and WebDAV in the XRootD framework
– Can share the same port as the XRootD protocol; clients are detected automatically
– Supports X.509 authentication, proxy certificates and VOMS extensions
– Any XRootD server keeps its advanced functionality and gains HTTP compliance
– IPv6 compatible
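Enabling XrdHTTP typically amounts to a few extra lines in the xrootd configuration. The sketch below is only an assumption-level illustration: the directive form, library path, and certificate locations are placeholders and should be verified against the XrdHTTP documentation.

    # hypothetical additions to an xrootd config to load the HTTP/WebDAV plugin
    cat >> /etc/xrootd/xrootd-clustered.cfg <<'EOF'
    xrd.protocol http:1094 /usr/lib64/libXrdHttp.so   # serve HTTP(S) alongside xrootd
    http.cert  /etc/grid-security/hostcert.pem        # X.509 host certificate
    http.key   /etc/grid-security/hostkey.pem
    http.cadir /etc/grid-security/certificates        # CA certificates for client verification
    EOF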

Why HTTP/WebDAV?
Covers most existing use cases
Supports WAN access: GET/PUT or direct chunked access
Ubiquitous:
– Available on all platforms
– Many standard clients, including web browsers
– Allows integration with the Cloud at the protocol level
HTTP moves more data worldwide than HEP does
HTTP + Grid authentication allow a browser to interact with a Grid SE (see the sketch below)
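For instance, a standard client such as curl can read from the dCache WebDAV door with a grid proxy. The sketch below assumes the usual /tmp/x509up_u<uid> proxy location and uses a placeholder file path and VO name.

    # obtain a VOMS proxy, then let curl present it as client certificate and key
    voms-proxy-init -voms fermilab                      # VO name is an assumption
    PROXY=/tmp/x509up_u$(id -u)
    curl -L --capath /etc/grid-security/certificates \
         --cert "$PROXY" --key "$PROXY" \
         -O https://fndca1.fnal.gov:2880/pnfs/fnal.gov/usr/path/to/file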

Conclusion
dCache is actively used by FIFE experiments
We run an XRootD door
We are starting to experiment with storage federations:
– XRootD
– WebDAV