WP5 Mass Storage – UK HEPGrid
UCL, 11th May 2001
Tim Folkes, RAL

Tasks
– Review and evaluate current technologies
– A common API to heterogeneous MSS
– Tape exchange, including metadata
– Metadata publishing

Common API
– Defines an API which Grid middleware can use to interface to MSS
– Side effect: user programs become portable as well
– The original scheme has changed due to ATF activity
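To make the idea concrete, here is a minimal sketch (not the WP5 API itself; all names are invented for this example) of a common front end dispatching to backend-specific mass-storage implementations:

import shutil

class MassStorageBackend:
    """One implementation per MSS (CASTOR, HPSS, a plain Unix filestore, ...)."""
    def put(self, local_path, store_path):
        raise NotImplementedError
    def get(self, store_path, local_path):
        raise NotImplementedError

class UnixFilestoreBackend(MassStorageBackend):
    """Trivial backend: the 'mass storage' is just a local or NFS filesystem."""
    def put(self, local_path, store_path):
        shutil.copyfile(local_path, store_path)
    def get(self, store_path, local_path):
        shutil.copyfile(store_path, local_path)

# A site registers whichever backends it actually runs.
BACKENDS = {"unix": UnixFilestoreBackend()}

def mss_put(local_path, store_path, backend="unix"):
    """What Grid middleware would call; the MSS behind it stays invisible."""
    BACKENDS[backend].put(local_path, store_path)

def mss_get(store_path, local_path, backend="unix"):
    BACKENDS[backend].get(store_path, local_path)

Middleware (for example the WP2 replication tools) would only ever see mss_put/mss_get, whichever MSS sits underneath.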

Pre-ATF
– Mass storage was treated as local
– Just tape storage
– No need to handle Grid proxies etc.
– WP2 datamover, replication manager etc. would handle this
– We would define an API like RFIO and testbeds would implement it locally

ATF
– Concept of Grid storage
– Defined a Storage Element (SE) that includes direct access from the Grid
– Access to disk and tape (i.e. all storage, and the management of the disk space)

ATF
– Three interfaces defined:
  – put/get
  – open/read
  – management
– Move from files to objects
– Required a rethink: need software, not just an API
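A sketch of how the three interfaces might be separated in code; the class and method names here are illustrative assumptions, not the ATF definitions:

from abc import ABC, abstractmethod

class TransferInterface(ABC):
    """put/get: whole-file movement into and out of the SE."""
    @abstractmethod
    def put(self, local_path, se_name): ...
    @abstractmethod
    def get(self, se_name, local_path): ...

class PosixLikeInterface(ABC):
    """open/read: byte-level access without copying the whole file first."""
    @abstractmethod
    def open(self, se_name, mode="rb"): ...
    @abstractmethod
    def read(self, handle, size): ...

class ManagementInterface(ABC):
    """Management of the storage behind the SE (space, lifetime, deletion)."""
    @abstractmethod
    def free_space(self): ...
    @abstractmethod
    def delete(self, se_name): ...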

ATF – What to do?
– Evaluate Castor
  – Stripped-down version for disk management
  – At RAL, for use on the datastore
– GridFTP for data transfer
– GridFTP server as SE

GridFTP
– Globus have reworked their I/O plans
– GridFTP is the basis of future data movement
  – tuneable for network performance
  – parallel streams
  – third-party transfers, partial file transfer
  – file and stream interfaces
– RAL and CERN have tested alpha code
  – but just for transfer
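For example, such transfers can be driven from a script along the following lines, assuming the Globus globus-url-copy client is installed and a valid GSI proxy already exists (hosts and paths below are placeholders):

import subprocess

def gridftp_copy(src_url, dst_url, streams=4):
    """Run a GridFTP transfer; -p selects the number of parallel data streams."""
    subprocess.run(["globus-url-copy", "-p", str(streams), src_url, dst_url],
                   check=True)

# Client-to-server copy into a local file:
#   gridftp_copy("gsiftp://se.example.ac.uk/data/run1.dat", "file:///tmp/run1.dat")
# Third-party transfer: with two gsiftp:// URLs the data moves server-to-server,
# never passing through the client:
#   gridftp_copy("gsiftp://gridftp.ral.example/data/run1.dat",
#                "gsiftp://gridftp.cern.example/data/run1.dat")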

GridFTP server as SE
– Looks feasible, given GASS experience, to implement the SE using the GridFTP server
– Uses Globus infrastructure
– Handles GSI proxies
– Gives trivial access to local disk and to any HSM with a Unix filestore interface
– Plan to produce a prototype for M9, covering a Unix filesystem and Castor
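A rough illustration of why an HSM with a Unix filestore view is "trivial" to serve: the server only has to map SE file names onto a mounted namespace. The mount points and mapping below are hypothetical.

import os

# Hypothetical storage roots: a plain disk area and an HSM namespace that is
# visible as an ordinary filesystem.
STORAGE_ROOTS = {
    "disk":   "/storage/disk",
    "castor": "/castor/example.site",
}

def resolve(se_filename, backend="disk"):
    """Map a file name presented to the SE onto a path the server can open()."""
    root = STORAGE_ROOTS[backend]
    path = os.path.normpath(os.path.join(root, se_filename.lstrip("/")))
    if not path.startswith(root + os.sep):
        raise ValueError("file name escapes the storage root")
    return path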

Task 2 – Tape exchange
– If the networks don't deliver, we may have to move data by tape between CERN and the Regional Centres
– It may be easier to take the tape out of the robot and move it, rather than copy the data
– There is an ANSI standard that covers this
– Will investigate this and implement it if suitable
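For context, the relevant standard is the ANSI/ISO tape label and interchange format (ANSI X3.27 / ISO 1001). The sketch below shows the kind of label parsing involved; the field offsets are quoted from memory and would need checking against the standard.

def parse_vol1(record):
    """VOL1: an 80-character record identifying the volume (serial number etc.)."""
    text = record.decode("ascii")
    if not text.startswith("VOL1"):
        raise ValueError("not a VOL1 label")
    return {"volume_serial": text[4:10].strip()}

def parse_hdr1(record):
    """HDR1: per-file label carrying, among other things, the file identifier."""
    text = record.decode("ascii")
    if not text.startswith("HDR1"):
        raise ValueError("not an HDR1 label")
    return {"file_identifier": text[4:21].strip()}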

Task 3 – Metadata
– Provide metadata about the SE and its contents, not about the data itself
– Require somewhere to publish static information
– Require support for acting as an active publisher of dynamic metadata
– Still require input from the other WPs on what information they need
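As an illustration of the static/dynamic split (attribute names are invented for the example; the real schema still depends on input from the other WPs):

import shutil
import time

# Static: written once when the SE is configured.
STATIC_INFO = {
    "se.name": "se01.example.ac.uk",
    "se.protocols": ["gsiftp"],
    "se.backends": ["unix", "castor"],
}

def dynamic_info(root="/storage/disk"):
    """Dynamic: recomputed every time the information is published."""
    usage = shutil.disk_usage(root)
    return {
        "se.free_bytes": usage.free,
        "se.used_bytes": usage.used,
        "se.timestamp": int(time.time()),
    }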

Deliverables
– WP5 will deliver software for:
  – data access
  – metadata production
– Provide an SE interface based on the GridFTP server
  – Support for Castor and a Unix filesystem, and access by the ReplicaManager, at M9

Future Developments
– User API to allow direct user access to remote data
– SE on other HSMs
– Disk housekeeping as part of the SE
– Management interface to the SE (create, stage, reserve, pinning, ...)
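Building on the earlier interface sketch, one hypothetical shape for the richer management operations listed above (all names are assumptions, not an agreed interface):

from abc import ABC, abstractmethod

class ExtendedManagementInterface(ABC):
    @abstractmethod
    def create(self, se_name, size_hint):
        """Create an entry and allocate space for a file of roughly size_hint bytes."""
    @abstractmethod
    def stage(self, se_name):
        """Recall a file from tape to disk ahead of access."""
    @abstractmethod
    def reserve(self, nbytes):
        """Reserve disk space; returns a reservation token."""
    @abstractmethod
    def pin(self, se_name, lifetime_s):
        """Keep a file on disk for at least lifetime_s seconds."""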