CASTOR project status
CERN IT-PDP/DM, October 1999
CASTOR
- CASTOR stands for “CERN Advanced Storage Manager”
- Short term goal: handle NA48 and COMPASS data in a fully distributed environment
- Long term goal: handle LHC data
CASTOR objectives (1)
- High performance
- Good scalability
- High modularity, to be able to easily replace components and integrate commercial products
- Provide HSM functionality as well as traditional tape staging
- Focussed on HEP requirements
CASTOR objectives (2)
- Available on all Unix and NT platforms
- Support for most SCSI tape drives and robotics
- Easy to clone and deploy
- Easy integration of new technologies: importance of a simple, modular and clean architecture
- Backward compatible with the existing SHIFT software
CASTOR layout: tape access
[Architecture diagram: stage client, stager, RFIO client, RFIOD, disk pool, name server, volume allocator, VDQM server, RTCOPY, mover, TPDAEMON, MSGD, TMS]
Main components
- Cthread and Cpool: hide differences between thread implementations (see the sketch after this list)
- CASTOR Database (Cdb): developed to replace the stager catalog; its use could be extended to other components like the name server
- Volume and Drive Queue Manager (VDQM): central but replicated server to optimize load balancing and the number of tape mounts
- Ctape: rewritten to use VDQM and to provide an API
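The following is a minimal sketch of the kind of portability layer the Cthread/Cpool bullet refers to: the same caller code runs on top of whatever native thread package the platform offers. The Cthread_create name mirrors the CASTOR naming convention, but the signature and the POSIX-only body shown here are assumptions made for illustration, not the actual CASTOR interface.

    /* Sketch only: a wrapper in this style hides the native thread API
     * (POSIX threads here; other builds of the layer would call DCE or
     * Windows threads).  Signature assumed, not from the CASTOR headers. */
    #include <pthread.h>

    int Cthread_create(void *(*start)(void *), void *arg)
    {
        pthread_t tid;

        if (pthread_create(&tid, NULL, start, arg) != 0)
            return -1;      /* creation failed on this platform */
        return 0;           /* a real layer would return a portable thread id */
    }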
Remote Tape Copy (RTCOPY)
- Calls the Ctape API to mount and position tapes
- Transfers data between disk server and tape drive
- Uses large memory buffers to cache data while the tape is being mounted
- Overlaps network and tape I/O using threads and circular buffers
- Overlaps network I/O from several input files if enough memory is available
- Memory buffer per mover = 80% of the tape server physical memory divided by the number of tape drives (worked example below)
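To make the sizing rule concrete, here is a small sketch that applies it; the function name and the sample figures (a 1 GB tape server with 4 drives) are illustrative and not taken from the slides.

    #include <stdio.h>

    /* Buffer-sizing rule from the slide: each mover gets 80% of the tape
     * server's physical memory, divided by the number of tape drives.
     * Function name and example numbers are illustrative only. */
    static size_t mover_buffer_bytes(size_t physical_memory, unsigned ndrives)
    {
        return (physical_memory * 8 / 10) / ndrives;
    }

    int main(void)
    {
        size_t mem = (size_t)1024 * 1024 * 1024;    /* 1 GB tape server */
        unsigned drives = 4;

        printf("buffer per mover: %zu MB\n",
               mover_buffer_bytes(mem, drives) / (1024 * 1024));  /* ~204 MB */
        return 0;
    }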
Basic Hierarchical Storage Manager (HSM)
- Automatic tape volume allocation
- Explicit migration/recall by user
- Automatic migration by the disk pool manager
- A name server is being implemented
- 3 database products will be tested: Cdb, Raima and Oracle
Name server
- Implements a ufs-like hierarchical view of the name space: files and directories with permissions and access times
- API client interface providing standard POSIX calls like mkdir, creat, chown, open, unlink, readdir (a usage sketch follows this list)
- This metadata is also stored as “user” labels together with the data on tape
- It can also be exported according to the C21 standard for interchange with non-CASTOR systems
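A hypothetical usage sketch of such a client interface. The Cns_-prefixed names, their signatures and the /castor/... path layout are assumptions chosen for illustration; only the list of POSIX-style operations comes from the slide.

    #include <sys/types.h>

    /* Assumed prototypes mirroring the POSIX calls listed above;
     * not taken from the actual CASTOR name-server headers. */
    int Cns_mkdir(const char *path, mode_t mode);
    int Cns_creat(const char *path, mode_t mode);
    int Cns_chown(const char *path, uid_t uid, gid_t gid);

    int register_raw_file(void)
    {
        /* create a directory and a file entry in the hierarchical name space */
        if (Cns_mkdir("/castor/cern.ch/na48/run1999", 0755) < 0)
            return -1;
        if (Cns_creat("/castor/cern.ch/na48/run1999/raw001.dat", 0644) < 0)
            return -1;
        /* hand ownership to the experiment's production account (ids assumed) */
        return Cns_chown("/castor/cern.ch/na48/run1999/raw001.dat", 1001, 1001);
    }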
Volume allocator
- Determines the most appropriate tapes for storing files according to (see the scoring sketch after this list):
  - file size
  - expected data transfer rate
  - drive access time
  - media cost
  - ...
- Handles pools of tapes:
  - private to an experiment
  - public pool
- Other supported features:
  - allow file spanning over several volumes
  - minimize the number of tape volumes for a given file
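One possible way to combine the criteria above into a ranking, shown purely as an illustration; the structure fields, weights and function names are invented for this sketch and do not describe the real CASTOR volume allocator's algorithm.

    #include <stddef.h>

    /* Illustration only: rank candidate tapes on the criteria listed above. */
    struct tape_candidate {
        double free_space;      /* bytes still available on the volume     */
        double transfer_rate;   /* expected drive transfer rate, bytes/s   */
        double access_time;     /* drive mount + positioning time, seconds */
        double media_cost;      /* relative cost per byte                  */
    };

    /* Higher score = better choice for a file of 'file_size' bytes. */
    static double score(const struct tape_candidate *t, double file_size)
    {
        if (t->free_space < file_size)      /* cannot hold the whole file  */
            return -1.0;
        return t->transfer_rate / (1.0 + t->access_time)
               - 0.5 * t->media_cost;       /* weights are arbitrary here  */
    }

    static const struct tape_candidate *
    pick_tape(const struct tape_candidate *tapes, size_t n, double file_size)
    {
        const struct tape_candidate *best = NULL;
        for (size_t i = 0; i < n; i++)
            if (score(&tapes[i], file_size) >
                (best ? score(best, file_size) : -1.0))
                best = &tapes[i];
        return best;            /* NULL: no single volume fits, e.g. span it */
    }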
Current status
- Cthread/Cpool: ready
- Cdb: being rewritten for optimization
- RTCOPY: interface to client, network and tape API ready
- Tape: client API ready, server being completed
- VDQM: ready
- Name server: being tested using Raima
- Volume allocator: design phase
- Stager: interfaced to Cdb, better robustness, API to be provided
- RFIO upgrade: multi-threaded server, 64-bit support, thread-safe client
Deployment
All the developments mentioned above will gradually be put into production over the autumn:
- new stager catalog for selected experiments
- tape servers
- basic HSM functionality:
  - modified stager
  - name server
  - tape volume allocator
Increase functionality (Phase 2)
The planned developments are:
- GUI and Web interface to monitor and administer CASTOR
- Enhanced HSM functionality:
  - transparent migration
  - intelligent disk space allocation
  - classes of service
  - automatic migration between media types
  - quotas
  - undelete and repack functions
  - import/export
These developments must be prioritized; design and coding would start in the year 2000.
Collaboration with IN2P3 on these developments has started (RFIO 64 bits).
The CDF experiment at Fermilab is interested.