Storage & Database Team Activity Report INFN CNAF, 22-12-2004.

Storage and Database Team
Team composition - P.P. Ricci, G. Lore, E. Vilucchi, S. Zani, B. Martelli
Tasks
–CASTOR HSM system - HW/SW installation and maintenance, gridftp and SRM access service
 P.P. Ricci, G. Lore, E. Vilucchi + 1 person in the CERN dev-team (February 2004)
 Total: ~1 FTE at CNAF, at least 2 needed
–DISK (SAN, NAS) - HW/SW installation and maintenance, remote (grid SE) and local (rfiod/nfs/GPFS) access service, clustered/parallel filesystem tests
 P.P. Ricci, G. Lore, E. Vilucchi, S. Zani
 Total: ~1.5 FTE, at least 3 needed
–DB (Oracle for CASTOR and RLS tests, main Tier1 hardware DB)
 B. Martelli, E. Vilucchi, P.P. Ricci
 Total: ~1 FTE, at least 2 needed
At least 3 more FTE staff required (1 for each topic!)

CASTOR issues (1)
At present STK library with 6 x LTO-2 and 2 x 9940B drives
–1200 x 200 GB LTO-2 tapes => 240 TB (only 15% used!)
–680 x 200 GB 9940B tapes => 136 TB (free)
–Upgrade with N more 9940B drives and 2500 x 200 GB tapes (500 TB) not before the 2nd half of 2005
In general CASTOR performance (as with other HSM software) increases with clever pre-staging of files (ideally ~90%)
LTO-2 drives not usable in a real production environment with the present CASTOR release
–hangs on locate/fskip occur on every non-sequential read operation; checksum errors and unterminated tapes (marked RDONLY) every 50-100 GB of data written (STK assistance is also needed)
–usable only with an average file size of 20 MB or more
–good reliability on optimized (sequential or pre-staged) operations
–fixes with CASTOR v.2 (Q2 2005)?
CERN and PIC NEVER reported HW problems with 9940B drives during last year's data challenges.
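As a concrete illustration of the pre-staging point, the following is a minimal sketch of how a bulk stage-in could be scripted, assuming the CASTOR 1 `stagein` client is installed on the node; the input file name is a hypothetical example containing one /castor/... pathname per line.

```python
#!/usr/bin/env python3
# Hedged sketch: pre-stage a list of CASTOR files into the disk buffer
# before an experiment reads them, so that most reads hit disk instead of
# triggering random tape recalls on the problematic LTO-2 drives.
# Assumes the CASTOR 1 "stagein" client is installed; "filelist.txt" is
# a hypothetical input file with one /castor/... pathname per line.
import subprocess
import sys

FILELIST = "filelist.txt"

def prestage(castor_path: str) -> int:
    """Ask the stager to recall one CASTOR HSM file to the disk buffer."""
    return subprocess.call(["stagein", "-M", castor_path])

def main() -> None:
    with open(FILELIST) as f:
        paths = [line.strip() for line in f if line.strip()]
    # Keep requests in list order: sequential recalls are the access
    # pattern the LTO-2 drives handle reliably.
    failures = [p for p in paths if prestage(p) != 0]
    for p in failures:
        print(f"pre-stage failed: {p}", file=sys.stderr)
    print(f"pre-staged {len(paths) - len(failures)} of {len(paths)} files")

if __name__ == "__main__":
    main()
```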

CASTOR issues (2)
Better to choose the media according to the type of files written to CASTOR.
Trigger (with experiment coordination) a temporary increase of the staging-area disk buffer and an optimized sequential parallel stage-in of data.
–Access to data over rfio or grid tools on CASTOR
–high probability of finding the file directly on disk (see LHCb).
So we strongly encourage the LHC experiments to use our CASTOR tape resources, and suggest to the other experiments (BaBar, CDF, VIRGO, AMS, MAGIC) the possibility of keeping a local backup copy of their disk storage on our LTO-2 tapes using CASTOR. This could help in case of disk data loss/corruption.
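As an illustration of the backup-copy suggestion, here is a hedged sketch of how an experiment's disk area could be copied into the CASTOR namespace (and from there migrated to LTO-2 tape) using the standard RFIO copy client; the local and /castor paths are hypothetical, and the CASTOR clients `rfcp` and `nsmkdir` are assumed to be installed.

```python
#!/usr/bin/env python3
# Hedged sketch: copy a local disk area into the CASTOR namespace so that
# a tape copy on LTO-2 exists as protection against disk loss/corruption.
# Paths are hypothetical examples; assumes the CASTOR RFIO client (rfcp)
# and name server client (nsmkdir) are installed, and that the target
# base directory already exists in the CASTOR name server.
import os
import subprocess

LOCAL_DIR = "/storage/babar/dst"                     # hypothetical disk area
CASTOR_DIR = "/castor/cnaf.infn.it/babar/backup"     # hypothetical target

for root, _dirs, files in os.walk(LOCAL_DIR):        # top-down: parents first
    rel = os.path.relpath(root, LOCAL_DIR)
    target = CASTOR_DIR if rel == "." else os.path.join(CASTOR_DIR, rel)
    if rel != ".":
        subprocess.call(["nsmkdir", target])         # mirror the directory tree
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(target, name)
        # rfcp writes through RFIO; the stager later migrates the file to tape.
        if subprocess.call(["rfcp", src, dst]) != 0:
            print(f"copy failed: {src} -> {dst}")
```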

Database present status
Database backends in production at Tier1:
–Oracle (CASTOR) - P.P. Ricci, E. Vilucchi
–PostgreSQL (Tier1's HW resources database) - B. Martelli
Participation in the LCG 3D project
–B. Martelli, E. Vilucchi (~1 FTE)
–2 Oracle 10g servers for replication tests with CERN
–R&D on Oracle Streams replication
–Support for Oracle 10g DB server installation
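For context on the Streams R&D item, a minimal sketch of the kind of call used to put one table under Streams capture on the source database, driven from Python with the cx_Oracle driver; the connection string, schema, table, queue and process names are purely illustrative, and the DBMS_STREAMS_ADM parameters should be checked against the Oracle 10g documentation in use.

```python
# Hedged sketch: register a single table for Oracle Streams capture, the
# kind of building block exercised in the CNAF-CERN replication tests.
# Connection details and all object names are hypothetical; parameter
# names follow the Oracle 10g DBMS_STREAMS_ADM documentation.
import cx_Oracle

ADD_TABLE_RULES = """
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name      => 'LCG3D.TEST_TABLE',        -- hypothetical table
    streams_type    => 'capture',
    streams_name    => 'CNAF_CAPTURE',            -- hypothetical capture process
    queue_name      => 'STRMADMIN.STREAMS_QUEUE', -- hypothetical queue
    include_dml     => TRUE,
    include_ddl     => TRUE,
    source_database => 'CNAFDB.INFN.IT');         -- hypothetical global name
END;
"""

conn = cx_Oracle.connect("strmadmin", "password", "dbhost:1521/CNAFDB")
cur = conn.cursor()
cur.execute(ADD_TABLE_RULES)   # run the anonymous PL/SQL block on the source DB
conn.commit()
conn.close()
```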

Database 2005 activity
LCG 3D project (B. Martelli, E. Vilucchi)
–CNAF joining the test bed: 2005 Q1
–Start of the production phase at CNAF: 2005 Q3
–Setup of a Red Hat cluster running Oracle 10g in failover configuration: 2005 Q3 (+ P.P. Ricci)
CASTOR
–Oracle database migration to 10g: 2005 Q1-Q2 (B. Martelli, E. Vilucchi, P.P. Ricci)
–Setup of an Oracle Real Application Cluster for CASTOR: 2005 Q3 (B. Martelli, E. Vilucchi, P.P. Ricci)
Deployment of Tier1's storage resources database in the production environment (B. Martelli)
INFN administration database replica at CNAF
–environment setup: 2005 Q1-Q2
–deployment in the production environment: 2005 Q2-Q3