Wahid Bhimji SRM; FTS3; xrootd; DPM collaborations; cluster filesystems.


- SRM is currently required on all WLCG storage. It has limitations; not much of the spec is used.
- Some (e.g. CERN!) are talking about not using it. There is a WLCG working group to monitor alternatives (ensure interoperation; limit proliferation; etc.).
- BUT ATLAS and LHCb require development to get away from SRM, and some issues are not solved.
- So: storage for the coming years needs a stable SRM interface. In future it may not; there will be an interface of some sort, but it will be lighter (I hope).
- FTS already supports gridftp-only endpoints, and FTS3 will also offer HTTP and xrootd.
- Xrootd use is expanding. The big interest is federated storage: failover and "any data, anywhere". (Other solutions, e.g. HTTP, can offer this and are not HEP-specific.)
- CMS is asking all sites to have an xrootd interface by the end of the year. ATLAS is also pushing deployment, but the use cases are not clear…
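The "failover" behaviour a federation gives a client can be sketched roughly as follows: try the local replica first, then fall back to a global redirector. This is an illustrative sketch only; the replica URLs and the availability check are hypothetical placeholders, not taken from any real site configuration.

```python
# Hypothetical sketch of federated-storage failover: a client works
# through an ordered list of replicas (local storage element first,
# global redirector last) and reads from the first one that responds.
# All URLs and the availability check are invented placeholders.

def pick_source(candidates, is_available):
    """Return the first replica URL that is readable, or None if all fail."""
    for url in candidates:
        if is_available(url):
            return url
    return None

# Local storage element first, federation redirector as the fallback.
replicas = [
    "root://local-se.example.ac.uk//atlas/data/file.root",
    "root://global-redirector.example.org//atlas/data/file.root",
]
```

In a real deployment the availability check would be the xrootd client's open attempt itself; the point of the federation is that this fallback happens inside the protocol rather than in experiment code.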

- DPM support at CERN is decreasing from its current (very good) level. CERN is asking for collaborators to continue to maintain DPM.
- They say they will provide minimal support even without collaboration (bug fixes etc.). Collaboration also has advantages in terms of getting needed developments.
- On the other hand, landscapes change: dCache is maybe easier to use than before; StoRM is maybe more stable; Lustre and HDFS are well established.
- Next year's shutdown _may_ also be an opportunity to try something different, though DPM is also offering DMLite on top of Lustre/HDFS.

- Join the collaboration (~1 FTE, or 2 x 0.5):
  - Do we have the skills for core development?
  - Does DPM have a long-term (support) future?
  - Is the shutdown a chance to move to something better (e.g. for hot files)?
- Move to something else (dCache; StoRM/Lustre; DMLite/Lustre; DMLite/HDFS):
  - Migrating data: for ATLAS a recopy is fine, but there is bound to be some hassle.
  - Migrating storage (onto something new/unfamiliar) is a lot of work, especially for smaller sites.
  - We have a lot of DPM experience (e.g. tuning), so an alternative may not work out better for us.

- Need to try DPM development to see how easy it is (e.g. with DMLite).
- Need criteria if comparing alternatives, e.g.:
  - Transition effort
  - Maintenance effort
  - For our use cases: stability; functionality; performance (inc. ease of tuning)
- Both of these take time (i.e. six months spent evaluating could instead be spent training in DPM).
- The site-admin view should have high weight...
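One hypothetical way to make such a comparison concrete is a weighted score over the criteria listed above. Everything below is a made-up illustration: the weights and per-option scores are invented placeholders, with the site-facing criteria weighted heavily to echo the point that the site-admin view should count for a lot.

```python
# Hypothetical weighted-score comparison over the evaluation criteria
# listed above. Weights and scores are invented for illustration only;
# the heavy transition/maintenance weights reflect the slide's point
# that the site-admin view should carry high weight.

def score(option_scores, weights):
    """Weighted sum of per-criterion scores (higher is better)."""
    return sum(weights[c] * s for c, s in option_scores.items())

weights = {
    "transition_effort": 2.0,   # site-admin facing: weighted heavily
    "maintenance_effort": 2.0,  # site-admin facing: weighted heavily
    "stability": 1.5,
    "functionality": 1.0,
    "performance": 1.0,
}

# Made-up 0-5 scores for one option, purely to show the mechanics.
stay_with_dpm = {"transition_effort": 5, "maintenance_effort": 3,
                 "stability": 4, "functionality": 3, "performance": 4}
```

The mechanics matter less than agreeing the criteria and weights up front, so that a six-month evaluation produces a comparable number per option rather than anecdotes.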