Disk and File Systems
Philippe GAILLARDON, IN2P3 Data Center
LCG Workshop, 12 March 2002

Introduction
This talk is based on our current experience with Mass Storage at IN2P3. Its aim is not to provide a definitive solution for the LHC, but to outline the key points we are facing.

Disk and File Systems
Disk
– Disk only?
– Hierarchical (disk/tapes)?
File Systems
– One UNIX file system?
– One name space?

Disk versus Hierarchical
Choice of a hierarchical system at IN2P3
– Cost
– Volume availability
– May lead to disastrous performance
Common points
– Sharing with many user hosts (hundreds to thousands)
– Sharing with many servers, with static or dynamic access to several servers
– Disk (and tape) drives must cooperate: Fibre Channel solutions look promising
– The user network will need increased performance

File System versus Name Space System
File System
– Solutions based on a single file system cannot be envisaged for petabyte volumes (recovery, performance, ...)
– A file system can only be a simulated file system
Name space system
– Addressability is at the file level
– Access must be as transparent as possible for user applications
– Many solutions exist in the HEP community

IN2P3's solution today
HSM solution based on HPSS
– Disk and tape hierarchy
  BaBar Objectivity: 65 TB / 20 TB disk out of HPSS
  Others: 45 TB / 1.7 TB disk
  Total: 110 TB
– One file-name space with RFIO access
– Developed 64-bit RFIO with CERN; functionality very close to CASTOR
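
As an illustration of the single file-name space with 64-bit RFIO access, here is a minimal C sketch. It assumes the RFIO client API of the CASTOR/SHIFT family (the <shift.h> header, rfio_stat64, rfio_perror, linking with -lshift); exact header and entry-point names varied between versions, so treat them as assumptions rather than the precise IN2P3 implementation.

    /* Sketch: stat'ing a possibly >2 GB file through RFIO.
     * Assumes the 64-bit RFIO entry points (rfio_stat64) developed
     * with CERN; names follow later CASTOR headers. */
    #define _LARGEFILE64_SOURCE   /* struct stat64 on Linux/glibc */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <shift.h>            /* RFIO client API (assumed header) */

    int main(int argc, char **argv)
    {
        struct stat64 st;

        if (argc != 2) {
            fprintf(stderr, "usage: %s path\n", argv[0]);
            return 1;
        }
        if (rfio_stat64(argv[1], &st) < 0) {
            rfio_perror(argv[1]);   /* RFIO's perror equivalent */
            return 1;
        }
        printf("%s: %lld bytes\n", argv[1], (long long)st.st_size);
        return 0;
    }

The point of the single name space is that the same call serves a local file, a file on a remote disk server, or an HPSS-resident file: the path, not the application code, selects the access method.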

Which experiments?
BaBar
– Objectivity: 65 TB / 20 TB on disk
– Other (including analysis and user space): 1.5 TB
Astrophysics
– EROS: 9 TB
– AUGER: 16.5 TB
LHC
– CMS (2 TB), ATLAS (1.8 TB), LHCb (1.6 TB), ALICE (1.3 TB)
Other
– D0 (5.7 TB), VIRGO (0.9 TB), ...

Performance
Dynamic data path between the user host and the right data server
– Best achieved with RFIO/HPSS readlist/writelist (10 MB/s)
– RFIO streaming mode gives good results (5 to 8 MB/s)
– Basic read/write performance is poor (1 to 2 MB/s)
Several RFIO servers bound to an HPSS disk server
– Static server name resolution at the moment
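
To make the streaming-mode point concrete, here is a sketch of a read loop that requests the V3 streaming protocol before opening the file. The rfiosetopt/RFIO_READOPT/RFIO_STREAM names are taken from the CASTOR RFIO client of that era; the exact signature of rfiosetopt differed between versions, so this is an assumption, not the measured IN2P3 setup.

    /* Sketch: reading a mass-storage file in RFIO streaming (V3) mode. */
    #include <stdio.h>
    #include <fcntl.h>            /* O_RDONLY */
    #include <shift.h>            /* rfio_open, rfiosetopt, ... (assumed) */

    int main(int argc, char **argv)
    {
        static char buf[1024 * 1024];  /* 1 MB buffer: large reads help */
        int v = RFIO_STREAM;           /* select the V3 streaming protocol */
        int fd, n;

        if (argc != 2) {
            fprintf(stderr, "usage: %s path\n", argv[0]);
            return 1;
        }
        /* Must be set before rfio_open for the mode to take effect */
        if (rfiosetopt(RFIO_READOPT, &v, sizeof(v)) < 0) {
            rfio_perror("rfiosetopt");
            return 1;
        }
        if ((fd = rfio_open(argv[1], O_RDONLY, 0)) < 0) {
            rfio_perror(argv[1]);
            return 1;
        }
        while ((n = rfio_read(fd, buf, sizeof(buf))) > 0)
            ;                          /* consume the n bytes here */
        rfio_close(fd);
        return 0;
    }

Streaming mode pipelines the network transfers instead of issuing one synchronous round trip per buffer, which is the likely source of the 5 to 8 MB/s versus 1 to 2 MB/s gap quoted above.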

Miscellaneous topics
Access rights
– UNIX-style permissions are inadequate
Identity
– Everything is based on uid/gid; this seems difficult to change
Quality of Service
– Achieved through the COS (Class of Service) in HPSS; this differs from other implementations, which rely on the directory tree
Statistics
– The statistics provided by HPSS and RFIO are insufficient

User view of the Mass Storage
Have a user database (or bookkeeping)
– Cost of search operations: high when searching a large directory tree, inadequate/prohibitive when seeking inside files
– Associate file names with specific physics-significant fields and management fields
– The Mass Storage is then used only as a data store
Transparency for applications
– Source code is not always available, and the RFIO API is not that simple
– We are developing transparent access through Bypass (University of Wisconsin)
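
To illustrate the bookkeeping idea, here is a small C sketch. The field names (experiment, year, run, stream) and the /hpss/in2p3.fr path layout are purely hypothetical examples, not an IN2P3 convention; the point is that the catalogue answers the "which files?" question so that the mass store itself is never searched.

    /* Sketch: deriving a mass-storage path from bookkeeping fields.
     * The record layout and path scheme are illustrative only. */
    #include <stdio.h>

    struct bk_record {            /* as it might sit in the catalogue */
        const char *experiment;   /* e.g. "babar" (hypothetical) */
        int         year;
        int         run;
        const char *stream;       /* e.g. "raw", "esd" */
    };

    static void bk_path(const struct bk_record *r, char *out, size_t len)
    {
        /* Physics-significant fields become a deterministic file name */
        snprintf(out, len, "/hpss/in2p3.fr/%s/%04d/run%06d.%s",
                 r->experiment, r->year, r->run, r->stream);
    }

    int main(void)
    {
        struct bk_record r = { "babar", 2002, 12345, "raw" };
        char path[256];

        bk_path(&r, path, sizeof(path));
        printf("%s\n", path);     /* hand this path to RFIO to open */
        return 0;
    }

Searches run against the catalogue (a database lookup), and the resulting names are opened directly, which avoids both the large-directory scans and the in-file seeks flagged as prohibitive above.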

Conclusion
The announced volumes require
– Cooperation of data servers, or Fibre Channel drives
– Name server support
Several Mass Storage systems are in use in HEP
– Don't provide too deeply embedded a solution (physics/mass storage)
– Promote user adoption of data bookkeeping
Documentation
–
–