Data management demonstrators
Ian Bird; WLCG MB, 18th January 2011

Reminder
March 2010
– Experiments express concern over data management and access
– First brainstorming; agreement to hold a Jamboree
June 2010 – Amsterdam Jamboree
– ~15 demonstrator projects proposed
– Process:
  – Follow up at WLCG meeting to ensure projects had effort and interest
  – Follow up in GDBs
  – By end of year, decide which would continue based on demonstrated usefulness/feasibility
2nd half of 2010
– Initial follow-up at WLCG workshop
– GDB status report
January 2011
– GDB status report (last week)
– Close of the process started in Amsterdam (today)

My Summary of demonstrators
12 projects scheduled at GDB
– 2 had no progress reported (CDN + Cassandra/FUSE)
– 10 either driven by, or with interest expressed by, the experiments
Assume these 10 will progress and be regularly reported on in GDB
Scope for collaboration between several
– To be encouraged/pushed
Several use xrootd technology
– Must ensure we arrange adequate support
– Which (and how) should be wrapped into WLCG software distributions?
The process and initiatives have been very useful
– MB endorses continued progress on these 10 projects

Summary … 1
ATLAS PD2P
– In use; linked to the LST demonstrator
– Implementation is ATLAS-specific, but the ideas can be re-used with other central task queues
ARC caching
– Used to improve ATLAS use of ARC sites; could also help others use ARC
– More general use of the cache needs input from developers (and interest/need from elsewhere)
Speed-up of SRM getTURL (make it synchronous)
– Essentially done, but important mainly with lots of small files (where other measures should be taken too)
Catalogue/SE sync + ACL propagation with MSG (a messaging sketch follows this slide)
– Prototype exists
– Interest in testing from ATLAS
– Ideas for other uses
CHIRP
– Seems to work well; use case of a personal SE (grid home directory)
– Used by ATLAS, tested by CMS
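
A minimal sketch of the kind of message that catalogue/SE synchronisation via MSG could publish, assuming a STOMP broker and the stomp.py client; the broker host, topic name, credentials and event fields are illustrative placeholders, not the actual prototype mentioned above.

```python
# Sketch: publish a namespace-change event so catalogue and SE stay in sync.
# Assumes a STOMP message broker reachable via the stomp.py client library;
# broker address, topic and event fields are placeholders, not the prototype.
import json
import time

import stomp

BROKER = ("msg-broker.example.org", 61613)    # hypothetical messaging broker
TOPIC = "/topic/storage.namespace.changes"    # hypothetical topic name


def publish_event(action, lfn, surl, acl=None):
    """Send one catalogue/SE change event (e.g. file deleted, ACL updated)."""
    event = {
        "action": action,      # "put", "delete", "acl-update", ...
        "lfn": lfn,            # logical file name known to the catalogue
        "surl": surl,          # storage URL on the SE
        "acl": acl,            # new ACL, if the action changed permissions
        "timestamp": time.time(),
    }
    conn = stomp.Connection([BROKER])
    conn.connect("user", "password", wait=True)   # placeholder credentials
    conn.send(destination=TOPIC, body=json.dumps(event))
    conn.disconnect()


if __name__ == "__main__":
    publish_event("delete",
                  "/grid/atlas/user/example.root",
                  "srm://se.example.org/atlas/example.root")
```

A consumer on the catalogue side would subscribe to the same topic and apply each event as it arrives, so catalogue and SE namespaces stay consistent without periodic full re-scans.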

Summary … 2
Xrootd-related (a federated-access sketch follows this slide)
– Xrootd (EOS, LST)
  – Well advanced, tested by ATLAS and CMS
  – Strategy for CASTOR evolution at CERN
– Xrootd – ATLAS
  – Augments DDM
  – Commonality with CMS
– Xrootd-global – CMS
  – Global xrootd federation; integrates with local SEs and filesystems
– Many commonalities – can we converge on a common set of tools?
Proxy caches in ROOT
– Requires validation before production
– Continue to study file caching in experiment frameworks
NFS 4.1
– Lots of progress in implementations and testing
– Needs some xrootd support (cmsd)
– Should the MB push for a pNFS kernel in SL?
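
A small illustration of what federated xrootd access looks like from a user job, assuming PyROOT is available; the redirector hostname and file path are placeholders, not any experiment's actual configuration.

```python
# Sketch: open a file through a global xrootd redirector, falling back to a
# locally staged copy. Redirector host and file path are placeholders only.
import ROOT

REDIRECTOR = "xrootd-redirector.example.org"   # hypothetical global redirector
LFN = "/store/data/Run2010B/example.root"      # hypothetical logical file name


def open_file(lfn):
    """Ask the federation first; the redirector forwards the request to
    whichever site actually holds a replica."""
    f = ROOT.TFile.Open("root://%s/%s" % (REDIRECTOR, lfn))
    if f and not f.IsZombie():
        return f
    # Fallback: a locally staged copy, if the job happens to have one.
    return ROOT.TFile.Open("/data/local%s" % lfn)


if __name__ == "__main__":
    f = open_file(LFN)
    if f:
        print("Opened %s" % f.GetName())
```

This access pattern is what the federation demonstrators exploit: a job does not need to know in advance which SE holds a replica of the file.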

Post-Amsterdam
Amsterdam was not only about the demonstrators
It was also a recognition that the network is a resource
– Could use remote access
– Should not rely on 100% accuracy of catalogues, etc.
– Can use the network to access remote services
– Network planning group set up
– Work with NRENs etc. is ongoing (they got our message)
Also understood where the data management model should change (a cache sketch follows this slide)
– Separate tape and disk caches (logically at least)
– Access to disk caches does not need SRM
– SRM for “tape” can be minimal functionality
– Disk to be treated as a cache; move away from data placement, for analysis at least
– Re-think “FTS”
– Etc.
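
To illustrate the "disk to be treated as a cache" point above, here is a minimal sketch of a least-recently-used eviction pass over a disk pool; the paths, watermarks and LRU policy are illustrative assumptions, not a description of any existing storage system.

```python
# Sketch: treat a disk pool as a cache rather than a placement target.
# Files are evicted in least-recently-accessed order until enough space is
# free; a missing file is simply re-fetched over the network or recalled
# from tape on the next access. All names and thresholds are assumptions.
import os

CACHE_DIR = "/data/diskcache"   # hypothetical disk pool
HIGH_WATERMARK = 0.90           # start evicting above 90% usage
LOW_WATERMARK = 0.80            # evict down to 80% usage


def usage_fraction(path):
    st = os.statvfs(path)
    return 1.0 - float(st.f_bavail) / float(st.f_blocks)


def evict_lru(path):
    """Remove least-recently-accessed files until usage drops below the low watermark."""
    if usage_fraction(path) < HIGH_WATERMARK:
        return
    candidates = []
    for root, _, names in os.walk(path):
        for name in names:
            full = os.path.join(root, name)
            candidates.append((os.stat(full).st_atime, full))
    for _, victim in sorted(candidates):   # oldest access time first
        os.remove(victim)
        if usage_fraction(path) < LOW_WATERMARK:
            break


if __name__ == "__main__":
    evict_lru(CACHE_DIR)
```

The point of the model change is that a cache miss is cheap to recover from, so analysis no longer depends on files having been explicitly placed at the site beforehand.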

Conclusions
We have changed direction
There are a number of very active efforts
– Driven by experiment needs/interests for the future
– Not just what was in the demonstrators
Should continue to monitor and support them
– And look for commonalities: an opportunity to reduce duplication and improve support efforts