Baseline Services Group: Status of File Transfer Service discussions. Storage Management Workshop, 6th April 2005. Ian Bird, IT/GD.

Reliable File Transfer
- James Casey presented the thinking behind, and the status of, the reliable file transfer service (in gLite).
- The interface proposed is that of the gLite FTS; the group agrees this is a reasonable starting point.
- James has discussed with each of the experiment representatives the details and how this might be used.
- The discussion is brought to the Storage Management workshop this week.
- Members of the sub-group:
  - ALICE: Latchezar Betev
  - ATLAS: Miguel Branco
  - CMS: Lassi Tuura
  - LHCb: Andrei Tsaregorodtsev
  - LCG: James Casey
- Terminology: "fts" denotes a generic file transfer service; "FTS" denotes the gLite implementation.
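
To make the "reliable" part concrete, the sketch below models what such a layer adds on top of the raw gridftp/SRM copy: a persistent job with a simple state model and bounded retries. This is purely illustrative; the state names, the `do_raw_copy` helper and the retry policy are assumptions for the example, not the actual gLite FTS implementation.

```python
# Minimal sketch of a "reliable file transfer" layer on top of a raw copy
# primitive (gridftp/srmcp). Hypothetical: states, helper names and retry
# policy are illustrative only, not the gLite FTS design.
import time
from dataclasses import dataclass, field

MAX_RETRIES = 3

@dataclass
class TransferJob:
    source_surl: str
    dest_surl: str
    state: str = "Submitted"      # Submitted -> Active -> Done | Failed
    attempts: int = 0
    errors: list = field(default_factory=list)

def do_raw_copy(src: str, dst: str) -> None:
    """Placeholder for the underlying gridftp/srmcp call."""
    raise NotImplementedError

def run_job(job: TransferJob) -> TransferJob:
    while job.attempts < MAX_RETRIES and job.state != "Done":
        job.state = "Active"
        job.attempts += 1
        try:
            do_raw_copy(job.source_surl, job.dest_surl)
            job.state = "Done"
        except Exception as exc:           # transient error: back off and retry
            job.errors.append(str(exc))
            job.state = "Failed"
            time.sleep(2 ** job.attempts)  # simple exponential back-off
    return job
```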

File transfer – experiment views
Propose the gLite FTS as the proto-interface for a file transfer service (see the note drafted by the sub-group).
- CMS:
  - PhEDEx is currently used to transfer to CMS sites (including Tier 2s) and satisfies CMS needs for production and data challenges.
  - The highest priority is to have the lowest layer (gridftp, SRM) and the other local infrastructure available and of production quality; remaining errors are handled by PhEDEx.
  - Work on a reliable fts should not detract from this, but integrating it as a service under PhEDEx is not a considerable effort.
- ATLAS:
  - DQ implements an fts similar to this (gLite) and works across the 3 grid flavours.
  - Accept the current gLite FTS interface (with the current FIFO request queue); willing to test prior to July.
  - Interface: DQ feeds requests into the FTS queue (illustrated in the sketch after this slide).
  - If these tests are OK, would want to integrate experiment catalogue interactions into the FTS.
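
For ATLAS the proposed integration is simply that DQ pushes its pending replications into the FTS FIFO request queue. A minimal, hypothetical adapter is sketched below; `dq_pending_transfers` and `fts_submit` are invented stand-ins for the real DQ and FTS interfaces, whose actual signatures are not defined here.

```python
# Hypothetical adapter: an experiment data-management layer (DQ- or
# PhEDEx-like) feeding its pending replications into a FIFO fts request
# queue. All names and signatures are illustrative, not the real APIs.
from collections import deque

fts_request_queue: deque = deque()   # stands in for the FTS FIFO request store

def dq_pending_transfers():
    """Yield (source_surl, dest_surl) pairs the experiment wants moved."""
    return []  # placeholder: would query the experiment's own bookkeeping

def fts_submit(source_surl: str, dest_surl: str) -> str:
    """Append a request to the FIFO queue and return a request id."""
    request_id = f"req-{len(fts_request_queue):06d}"
    fts_request_queue.append((request_id, source_surl, dest_surl))
    return request_id

def feed_requests():
    # The experiment layer keeps ownership of *what* to move; the fts only
    # decides *how* and *when* it is moved.
    for src, dst in dq_pending_transfers():
        rid = fts_submit(src, dst)
        print(f"queued {src} -> {dst} as {rid}")
```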

FTS summary – continued
- LHCb:
  - Have a service with a similar architecture, but with request stores at every site.
  - Would integrate with the FTS by writing agents for VO-specific actions (e.g. catalogue interaction); this needs VO agents at all sites (a sketch of such an agent follows this slide).
  - A central request store is OK for now; having request stores at the Tier 1s would allow scaling.
  - Would like to use it in September for data created in the challenge, and would like resources in May(?) for integration and creation of the agents.
- ALICE:
  - See the fts layer as the service that underlies data placement; used aiod for this in DC04.
  - Expect the gLite FTS to be tested with the other data management services in SC3 – ALICE will participate.
  - Expect the implementation to allow for experiment-specific choices of higher-level components such as file catalogues.
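
The LHCb model relies on VO-specific agents running at the sites to perform actions a generic service cannot know about, typically catalogue registration of completed transfers. A hypothetical agent loop is sketched below; `fts_completed_since`, `register_replica` and the polling interval are assumptions for illustration only.

```python
# Hypothetical VO agent: polls the transfer service for newly completed
# transfers at a site and performs the VO-specific follow-up (here,
# catalogue registration). All function names are invented.
import time

def fts_completed_since(site: str, cursor: int):
    """Return (new_cursor, [(lfn, dest_surl), ...]) completed at `site`."""
    return cursor, []  # placeholder for a query to the transfer service

def register_replica(lfn: str, surl: str) -> None:
    """Placeholder for the experiment catalogue registration call."""
    pass

def vo_agent(site: str, poll_interval: float = 60.0):
    cursor = 0
    while True:
        cursor, done = fts_completed_since(site, cursor)
        for lfn, surl in done:
            register_replica(lfn, surl)   # VO-specific action on completion
        time.sleep(poll_interval)
```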

File transfer service – summary
- The base storage and transfer infrastructure (gridftp, SRM) must become available at high priority and demonstrate sufficient quality of service.
- All see value in a more reliable transfer layer in the longer term (is it relevant between two SRMs?), but this could be srmCopy (see the sketch after this slide).
- As described, the gLite FTS seems to satisfy the current requirements, and integrating it would require modest effort.
- The experiments differ on the urgency of an fts because of differences in their current systems.
- Interaction with the fts (e.g. catalogue access) can sit either in the experiment layer or be integrated into the FTS workflow.
- Regardless of the transfer system deployed, experiment-specific components need to run at both Tier 1s and Tier 2s.
- Without a general service, inter-VO scheduling, bandwidth allocation, prioritisation, rapid handling of security issues, etc. would be difficult.
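
The srmCopy point is a design choice: the same "move file A to B" request can be served either by an external transfer service driving gridftp, or by asking the storage system itself to perform a third-party copy (srmCopy-style). The sketch below only illustrates that choice as two interchangeable back-ends behind one call; both back-ends are stubs with invented names.

```python
# Illustration of the design choice above: one replication request, two
# possible back-ends. Both functions here are hypothetical stubs.
def transfer_via_fts(source_surl: str, dest_surl: str) -> None:
    """Submit to an external file transfer service that drives gridftp."""
    raise NotImplementedError

def transfer_via_srmcopy(source_surl: str, dest_surl: str) -> None:
    """Ask the destination storage system to perform the copy itself."""
    raise NotImplementedError

def move_file(source_surl: str, dest_surl: str, backend: str = "fts") -> None:
    # The experiment layer should not care which back-end is used, as long
    # as the request and outcome semantics are the same.
    if backend == "fts":
        transfer_via_fts(source_surl, dest_surl)
    else:
        transfer_via_srmcopy(source_surl, dest_surl)
```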

fts – open issues
- Interoperability with other fts implementations:
  - interfaces
  - srmCopy vs a file transfer service
- Backup plan and timescale for component acceptance?
- Timescale for a decision for SC3: end of April.
- All experiments currently have an implementation.
- How to send a file to multiple destinations? (a fan-out sketch follows this list)
- What agents are provided by default, as production agents, or as stubs for the experiments to extend?
- From Jon Bakken:
  - What are the anticipated bottlenecks for the local inter-node file transfers needed to make FTS work at high rate?
  - How can global sharing of data pools be implemented in the FTS design?
  - How does FTS scale for writing into the pools if the tape backend stalls, especially given the local file locations on each node?
  - How does pinning work in this local model?
  - How will the DNS certificate issues be addressed?
  - How will multiple files in one srmcp be addressed?
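
On sending one file to multiple destinations: the simplest reading is a fan-out at submission time, one independent transfer job per destination, so failures are retried or reported per destination rather than for the whole set. The sketch below shows only that; `fts_submit` is again a hypothetical placeholder and says nothing about how the real service would schedule or deduplicate the copies.

```python
# Hypothetical fan-out: one source file replicated to several destinations
# by submitting one independent transfer job per destination SURL.
from typing import List

def fts_submit(source_surl: str, dest_surl: str) -> str:
    """Placeholder for a single-job submission; returns a request id."""
    return f"req:{dest_surl}"

def replicate_to_many(source_surl: str, destinations: List[str]) -> List[str]:
    # Each destination becomes its own job; partial failures do not block
    # the copies that succeed.
    return [fts_submit(source_surl, dest) for dest in destinations]
```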