CASTOR project status
CERN IT-PDP/DM, October 1999

CASTOR  CASTOR stands for “CERN Advanced Storage Manager”  Short term goal: handle NA48 and COMPASS data in a fully distributed environment  Long term goal: handle LHC data

CASTOR objectives (1)
 High performance
 Good scalability
 High modularity, to be able to easily replace components and integrate commercial products
 Provide HSM functionality as well as traditional tape staging
 Focussed on HEP requirements

CASTOR objectives (2)
 Available on all Unix and NT platforms
 Support for most SCSI tape drives and robotics
 Easy to clone and deploy
 Easy integration of new technologies: importance of a simple, modular and clean architecture
 Backward compatible with the existing SHIFT software

CASTOR layout: tape access
[Architecture diagram showing the components: STAGE client, STAGER, RFIOD, TPDAEMON, MSGD, disk pool, TMS, RFIO client, name server, volume allocator, RTCOPY, VDQM server, mover]

Main components
 Cthread and Cpool: hide differences between thread implementations (a sketch of the idea follows this list)
 CASTOR Database (Cdb): developed to replace the stager catalog; its use could be extended to other components such as the name server
 Volume and Drive Queue Manager (VDQM): central but replicated server to optimize load balancing and the number of tape mounts
 Ctape: rewritten to use VDQM and to provide an API
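
As a rough illustration of what Cthread/Cpool hide, the sketch below wraps pthread_create() behind a single portable call. The wrapper name (Cthread_create_like) and its body are invented stand-ins for this presentation, not the real Cthread code; build with cc -pthread.

#include <pthread.h>
#include <stdio.h>

/* Portable entry point: CASTOR components would call something like this
 * instead of calling pthread_create() or CreateThread() directly. */
static pthread_t last_tid;

static int Cthread_create_like(void *(*start)(void *), void *arg)
{
    return pthread_create(&last_tid, NULL, start, arg) == 0 ? 0 : -1;
}

static void *worker(void *arg)
{
    printf("serving request %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    int request = 42;

    if (Cthread_create_like(worker, &request) < 0)
        return 1;
    pthread_join(last_tid, NULL);   /* wait for the worker before exiting */
    return 0;
}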

Remote Tape Copy
 Calls the Ctape API to mount and position tapes
 Transfers data between disk server and tape drive
 Uses large memory buffers to cache data while the tape is being mounted
 Overlaps network and tape I/O using threads and circular buffers
 Overlaps network I/O from several input files if enough memory is available
 Memory buffer per mover = 80% of the tape server's physical memory divided by the number of tape drives (worked example below)
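
A worked example of the buffer rule, with invented figures (a tape server with 1 GB of physical memory and 4 attached drives; these are not CASTOR defaults):

#include <stdio.h>

int main(void)
{
    /* Example figures only: a tape server with 1 GB of RAM and 4 drives. */
    unsigned long physical_memory = 1024UL * 1024 * 1024;
    int nb_tape_drives = 4;

    /* Rule from the slide: 80% of physical memory / number of drives. */
    unsigned long buffer_per_mover =
        (unsigned long)(physical_memory * 0.8) / nb_tape_drives;

    printf("memory buffer per mover: %lu MB\n",
           buffer_per_mover / (1024 * 1024));   /* -> 204 MB in this example */
    return 0;
}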

Basic Hierarchical Storage Manager (HSM)
 Automatic tape volume allocation
 Explicit migration/recall by user
 Automatic migration by the disk pool manager (see the sketch below)
 A name server is being implemented; 3 database products will be tested: Cdb, Raima and Oracle
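
A toy sketch of the automatic-migration idea: the disk pool manager migrates files to tape once pool usage crosses a watermark. The 90% threshold and the pool sizes are invented for illustration; the slide only states that migration can be automatic.

#include <stdio.h>

struct pool_state {
    double used_bytes;
    double total_bytes;
};

static int needs_migration(const struct pool_state *p)
{
    return p->used_bytes / p->total_bytes > 0.90;   /* example watermark */
}

int main(void)
{
    struct pool_state pool = { 9.3e12, 10.0e12 };   /* 9.3 TB of 10 TB used */

    if (needs_migration(&pool))
        printf("pool above watermark: migrate least recently used files to tape\n");
    else
        printf("pool below watermark: nothing to do\n");
    return 0;
}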

Name server
 Implement a ufs-like hierarchical view of the name space: files and directories with permissions and access times
 Client API which provides standard POSIX calls such as mkdir, creat, chown, open, unlink and readdir (illustrated below)
 This metadata is also stored as "user" labels together with the data on tape
 It can also be exported according to the C21 standard for interchange with non-CASTOR systems
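
A minimal sketch of how a client might use such an interface. The Cns_-prefixed names, their signatures and the /castor/... path are assumptions made for illustration (the slide only states that the client API mirrors POSIX calls); the stub bodies just print the action so the example is self-contained.

#include <stdio.h>
#include <sys/types.h>

/* Stand-in stubs: the real calls would contact the CASTOR name server. */
static int Cns_mkdir(const char *path, mode_t mode)
{
    printf("name server: mkdir %s (mode %o)\n", path, (unsigned int)mode);
    return 0;
}

static int Cns_creat(const char *path, mode_t mode)
{
    printf("name server: creat %s (mode %o)\n", path, (unsigned int)mode);
    return 0;
}

int main(void)
{
    /* Create a directory and a file entry in the hierarchical name space,
     * exactly as one would with the corresponding POSIX mkdir/creat calls. */
    Cns_mkdir("/castor/cern.ch/user/e/example", 0755);
    Cns_creat("/castor/cern.ch/user/e/example/run001.raw", 0644);
    return 0;
}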

Volume Allocator
 Determine the most appropriate tapes for storing files according to: file size, expected data transfer rate, drive access time, media cost, ... (see the selection sketch below)
 Handle pools of tapes: pools private to an experiment, and a public pool
 Other supported features: allow file spanning over several volumes; minimize the number of tape volumes for a given file
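
A toy selection sketch along the lines of the criteria above: rank candidate volumes by free space, expected transfer rate, access time and media cost. The struct, the weighting and the sample data are invented; this is not the CASTOR allocation algorithm.

#include <stdio.h>

struct tape_volume {
    const char *vid;        /* volume identifier */
    double free_bytes;      /* remaining capacity */
    double transfer_mb_s;   /* expected data transfer rate */
    double access_time_s;   /* drive mount/position time */
    double cost_per_gb;     /* media cost */
};

/* Lower score is better: prefer fast, cheap, quickly mounted volumes
 * that can hold the whole file.  The weighting is purely illustrative. */
static double score(const struct tape_volume *v, double file_bytes)
{
    if (v->free_bytes < file_bytes)
        return 1e30;                       /* cannot hold the file */
    return v->access_time_s
         + file_bytes / (v->transfer_mb_s * 1e6)
         + v->cost_per_gb;
}

int main(void)
{
    struct tape_volume pool[] = {
        { "I00017", 8e9, 10.0, 40.0, 0.5 },
        { "I00042", 2e9, 14.0, 25.0, 0.8 },
    };
    double file_bytes = 1.5e9;
    int i, best = 0, n = (int)(sizeof(pool) / sizeof(pool[0]));

    for (i = 1; i < n; i++)
        if (score(&pool[i], file_bytes) < score(&pool[best], file_bytes))
            best = i;
    printf("selected volume %s\n", pool[best].vid);
    return 0;
}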

Current status
 Cthread/Cpool: ready
 Cdb: being rewritten for optimization
 RTCOPY: interface to client, network and tape API ready
 Tape: client API ready, server being completed
 VDQM: ready
 Name server: being tested using Raima
 Volume allocator: design phase
 Stager: interfaced to Cdb, better robustness, API to be provided
 RFIO upgrade: multi-threaded server, 64-bit support, thread-safe client

Deployment
 All the developments mentioned above will gradually be put into production over the autumn:
   new stager catalog for selected experiments
   tape servers
   basic HSM functionality: modified stager, name server, tape volume allocator

Increase functionality (Phase 2)
 The planned developments are:
   GUI and Web interface to monitor and administer CASTOR
   Enhanced HSM functionality: transparent migration, intelligent disk space allocation, classes of service, automatic migration between media types, quotas, undelete and repack functions, import/export
 These developments must be prioritized; design and coding would start in the year 2000
 Collaboration with IN2P3 on these developments has started (64-bit RFIO)
 The CDF experiment at Fermilab is interested