CASTOR Project Status
CERN IT-PDP/DM, February 2000

Agenda
• CASTOR objectives
• CASTOR components
• Current status
• Early tests
• Possible enhancements
• Conclusion

CASTOR
• CASTOR stands for "CERN Advanced Storage Manager"
• Evolution of SHIFT
• Short-term goal: handle NA48 data (25 MB/s) and COMPASS data (35 MB/s) in a fully distributed environment
• Long-term goal: prototype for the software to be used to handle LHC data
• Development started in January 1999
• CASTOR is being put in production at CERN
• See:

CASTOR objectives
• CASTOR is a disk pool manager coupled with a backend store which provides:
  - Indirect access to tapes
  - HSM functionality
• Major objectives are:
  - High performance
  - Good scalability
  - Easy to clone and deploy
  - High modularity, to easily replace components and integrate commercial products
• Focussed on HEP requirements
• Available on most Unix systems and Windows/NT

CASTOR components
• Client applications use the stager and RFIO (a minimal client-side sketch follows this slide)
• The backend store consists of:
  - RFIOD (Disk Mover)
  - Name server
  - Volume Manager
  - Volume and Drive Queue Manager
  - RTCOPY daemon + RTCPD (Tape Mover)
  - Tpdaemon (PVR)
• Main characteristics of the servers:
  - Distributed
  - Critical servers are replicated
  - Use the CASTOR Database (Cdb) or commercial databases like Raima and Oracle
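To make the client-side picture concrete, here is a minimal sketch of a remote file read through the RFIO library, whose calls mirror the POSIX I/O calls. The header name, the exact signatures and the rfio_perror() helper are assumptions and may differ in the actual CASTOR/SHIFT release; this is an illustrative sketch, not the project's reference code.

/*
 * Minimal sketch: read a remote file through RFIO (the disk mover interface).
 * Assumptions: the client header is "rfio.h" and rfio_open/rfio_read/
 * rfio_close/rfio_perror behave like their POSIX counterparts; link against
 * the RFIO/SHIFT client library. Names may differ between releases.
 */
#include <stdio.h>
#include <fcntl.h>
#include "rfio.h"              /* assumed RFIO client header */

int main(int argc, char **argv)
{
    char buf[65536];
    long total = 0;
    int  fd, n;

    if (argc != 2) {
        fprintf(stderr, "usage: %s diskserver:/path/to/file\n", argv[0]);
        return 1;
    }

    /* rfio_open() contacts the RFIOD (disk mover) on the remote disk server */
    if ((fd = rfio_open(argv[1], O_RDONLY, 0)) < 0) {
        rfio_perror("rfio_open");          /* assumed perror()-style helper */
        return 1;
    }

    /* read the file in chunks, exactly as one would with read(2) */
    while ((n = rfio_read(fd, buf, sizeof(buf))) > 0)
        total += n;

    rfio_close(fd);
    printf("read %ld bytes from %s\n", total, argv[1]);
    return 0;
}

In the SHIFT-style workflow this sketch assumes, a file on a managed disk pool would normally be located via the stager first, and the same RFIO calls then apply to the disk pool path the stager returns.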

CASTOR layout (diagram)
Components shown: STAGER, RFIOD (Disk Mover), TPDAEMON (PVR), MSGD, DISK POOL, TMS, NAME server, VOLUME manager, RTCOPY, VDQM server, RTCPD (Tape Mover)

Basic Hierarchical Storage Manager (HSM)
• Automatic tape volume allocation
• Explicit migration/recall by user
• Automatic migration by disk pool manager

Current status
• Development complete
• New stager with Cdb in production for DELPHI
• Mover and HSM being extensively tested

Early tests
• RTCOPY
• Name Server
• ALICE Data Challenge

Hardware configuration for RTCOPY tests (1) (diagram)
Setup: SUN E450 with SCSI disks (striped FS, ~30 MB/s), Linux PCs; tape drives: STK Redwood, IBM 3590E, STK 9840

RTCOPY test results (1)

Hardware configuration for RTCOPY tests (2) (diagram)
Setup: SUN E450 and Linux PCs; SCSI disks (striped FS, ~30 MB/s) and EIDE disks (~14 MB/s); tape drives: STK Redwood, STK 9840; networking: Gigabit Ethernet and 100BaseT

RTCOPY test results (2)
• A short (1/2 hour) scalability test was run in a distributed environment:
  - 5 disk servers
  - 3 tape servers
  - 9 drives
• 120 GB transferred
• 70 MB/s aggregate (mount time overhead included)
• 90 MB/s aggregate (mount time overhead excluded)
• This exceeds the COMPASS requirements and is just below the ATLAS/CMS requirements
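As a rough consistency check (assuming the full 120 GB was transferred within the half-hour window):

    120 GB / 1800 s ≈ 0.067 GB/s ≈ 67 MB/s

which is in line with the quoted 70 MB/s aggregate figure that includes the mount time overhead.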

Name server test results (1)

Name server test results (2)

ALICE Data Challenge (diagram)
Setup: 10 × PowerPC nodes (32 MB) and 7 × PowerPC nodes (32 MB), HP Kayak, 3COM Fast Ethernet switch, 12 × Redwood drives, 4 × Linux tape servers, 12 × Linux disk servers, Gigabit switch, Smart Switch Router

Possible enhancements
• RFIO client - name server interface
• 64-bit support in RFIO (collaboration with IN2P3)
• GUI and Web interface to monitor and administer CASTOR
• Enhanced HSM functionality:
  - Transparent migration
  - Intelligent disk space allocation
  - Classes of service
  - Automatic migration between media types
  - Quotas
  - Undelete and Repack functions
  - Import/Export

Conclusion
• 2 man-years of design and development
• Easy deployment because of modularity and backward compatibility with SHIFT
• Performance limited only by the hardware configuration
• See: