Summary of April 1999 HepiX mass storage meeting
Focus, 1 July 1999
H. Renshall, PDP/IT

Slide 2: Rutherford Appleton Lab Mass Storage session - Final Agenda, Friday April 16, 1999. Organiser: F. Gagliardi

Org.      Speaker         Title
FNAL      D. Petravick    Status of the ENSTORE Project and HPSS at FNAL
Quadrics  D. Roweth       The EuroStore Project
CERN      J.-P. Baud      The CASTOR Project
SLAC      C. Boeheim      Mass Storage for BABAR at SLAC
IN2P3     R. Rumler       BABAR Storage at Lyon
INFN      E. Leonardi     INFN Regional Storage Center for BABAR (Rome)
CERN      H. Renshall     HPSS Experience at CERN

Slide 3: FNAL, D. Petravick: Status of the ENSTORE Project and HPSS at FNAL

Enstore is a data management system to be used for Run II. It integrates tools from DESY, using their 'Perfectly Normal File System' (PNFS) to manage the file store name space, with tools from FNAL.
Users copy files to and from the store with a parallelisable encp command, which stores the original file name as part of the eventual tape file (cpio format). (A minimal usage sketch follows below.)
Will exploit commodity tape drives - tens of drives for D0.
Can create and import tapes made outside the system.
Delivery is tied to the Run II schedule - one Enstore per experiment. Run II will use Gigabit Ethernet.
Will add a second EMASS robot: total 7 towers, 4 arms, cartridges.
For serial media, considering Mammoth 2, AIT and DLT.
HPSS service very up to date - looking at Y2K issues.
Added a driveless frame to the IBM 3494 robot, but usage is less than 0.2 TB/week.
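As a rough illustration of the copy-in/copy-out pattern described above, a minimal shell sketch: the PNFS mount point, experiment directory and file names are hypothetical, and any site-specific encp options are omitted.

    # Copy a local raw-data file into Enstore; the destination lives in the PNFS
    # name space and the original file name is carried onto tape (cpio format).
    encp ./run2_raw_000123.dat /pnfs/d0/raw/run2_raw_000123.dat

    # Copy the same file back out of the store to local disk.
    encp /pnfs/d0/raw/run2_raw_000123.dat ./run2_raw_000123.copy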

Slide 4: Quadrics, D. Roweth: The EuroStore Project - Initial Design and First Results

Goal: to develop a compact, scalable, high-performance storage system based on existing components and long-term experience.
EU consortium - full partners are DESY, CERN, Quadrics Supercomputer and the Hellenic Space Agency; associates are the Athens medical centre, the Hellenic weather service and the Tera foundation (medical applications of accelerators).
Manpower 33 person-years and a total cost of 3 M ECU, spread over several years from a March start.
User requirements include: less than 5% data-rate reduction in use, tens of TB capacity, no maintenance breaks, automatic database crash recovery, self-describing media contents, scaling to PB, at least 2^64 files, several thousand clients, and GB/sec aggregate rates from 10 to 100 MB/sec streams.
QSW working on Parallel File System extensions.
The Hierarchical Storage Manager is being implemented in Java at DESY.
Sequential PUT/GET of complete files to/from a multi-level hierarchy; emphasis on the HEP access profile.
CERN has been hosting the QSW prototype since April 99 for access by industrial partners.
Core implementation 99% complete and PFS-HSM integration done.
Prototype feedback and remaining features (HSM migration) now started.

Slide 5: CERN, J.-P. Baud: The CASTOR Project

Current SHIFT (CERN staging system) problems:
–more than 10 MB/s per stream is hard to achieve
–the stager catalogue does not scale to very large numbers of files
–no automatic allocation of tapes or access to files by name
–no automatic migration of data between disk and tape
OSM rejected, HPSS being evaluated, EUROSTORE being closely observed.
CASTOR, the CERN Advanced Storage Manager, is to handle LHC data in a fully distributed environment with good sequential and random access performance:
–high performance and scalability
–modular, so that components can be replaced or commercial products integrated
–provides HSM functionality (name space, migrate/recall)
–supports all Unix and NT platforms and most SCSI tape drives and robotics
–easy to clone and deploy, and to integrate new technologies as they emerge

Slide 6: CASTOR (2) - Integration of emerging technologies

Plan to demonstrate that Linux PCs can be used as tape or disk servers:
–port of the SHIFT software and the STK CSC toolkit completed
–Linux tape driver modified
–Gigabit and HIPPI drivers written at CERN
–each PC to support 3 or 4 drives of STK 9840 (10 GB) or Redwood (50 GB) (Remark: in production at CERN since May)
Plan to use Storage Area Networks to decrease the number of data movers:
–CPU servers directly connected to and sharing SAN disks
–classical disk servers no longer needed
–an emerging technology where we need to gain expertise
–the hierarchical storage management function will still be required
Deployment time scales:
–Linux tape servers in spring 99
–test of the new stager catalogue (with DELPHI) in spring 99
–test of the new data mover in summer 99 and a full COMPASS test (35 MB/s) in autumn 99
–end 99/early 2000: test with ALICE at 100 MB/s
–end 99/early 2000: first HSM prototype and SAN testing/integration

Slide 7: SLAC, C. Boeheim: Mass Storage for BABAR at SLAC

BABAR uses both the Objectivity OO database and the HPSS storage manager.
The DAQ contains 78 Sun systems with several days of disk buffer - not Objectivity.
Raw data is pulled from the DAQ both by the reconstruction farm (220 Sun Ultra 5, with 200 more to come) and by the HPSS servers (each at 5 MB/sec).
Raw and reconstructed data are stored in Objectivity databases in separate federations.
Already hit the limit of 1024 federation connections from only 28 farm workers. (Remark: the limit has since been fixed - it was both in Solaris and, harder to fix, in Objectivity.)
Farm nodes put data into several Objectivity AMS disk servers, which use the OOFS/HPSS interface (via PFTP) to migrate it to HPSS - 10 MB/s aggregate.
HPSS data movers are currently IBM but will move to Sun when the port is finished.
Recall of Objectivity databases for analysis goes to a different set of disk servers, implemented by using an Objectivity change-DB command after storage.
They have observed that their Objectivity lock servers are handling many more locks than expected and are not performing very well. (Remark: Chuck recently confirmed that locks are the current main bottleneck and can even stop the entire system.)

Slide 8: IN2P3, R. Rumler: BABAR Data Store at IN2P3

From SLAC, expect 18 TB in 99, 165 TB in 2000 and 370 TB in the following year.
Will run Objectivity and HPSS as at SLAC, and will store both raw and reconstructed Objectivity databases using OOFS/HPSS.
For Objectivity: a Sun E4500 server with 200 GB of disk; a Sun A7000 is under study.
For HPSS: a 4-processor IBM F50 with 1 GB of memory and 28 GB of internal disk
–16 external 4 GB SSA disks for the disk mover cache
–3 Redwood drives in an STK robot
–networking seen at 11.5 MB/s, tape writing at 8.5 MB/s, disk I/O at 9 MB/s
Currently gaining experience with Objectivity, HPSS and their interface.
Full databases will be transferred from SLAC (a minimal sketch of this procedure follows below):
–oocopydb at SLAC
–tar and gzip to a file sent via ftp or tape
–gunzip, untar and ooattachdb at IN2P3
–currently too slow (1 MB/s per Redwood) and requires locking a federation during the copy, so they will try to optimise this or develop a special tool based on the IN2P3 stager
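To make the transfer steps above concrete, a minimal shell sketch with a hypothetical database name and staging directories; the Objectivity tools oocopydb and ooattachdb are named on the slide, but their argument syntax depends on the installed release, so they appear here only as commented placeholders.

    # Step 1 (at SLAC): export the database from the production federation.
    #   oocopydb <federation boot file> <database> /scratch/export/rawRun123.DB   # hypothetical arguments

    # Step 2 (at SLAC): pack and compress the exported copy.
    tar cf - -C /scratch/export rawRun123.DB | gzip > rawRun123.tar.gz

    # Step 3: send the archive to IN2P3 via ftp, or write it to tape instead.
    #   ftp ccin2p3.example.fr   (then: put rawRun123.tar.gz)                     # hypothetical host

    # Step 4 (at IN2P3): uncompress, unpack and attach to the local federation.
    gunzip -c rawRun123.tar.gz | tar xf - -C /objy/import
    #   ooattachdb <local federation boot file> /objy/import/rawRun123.DB         # hypothetical arguments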

Slide 9: INFN (Rome), E. Leonardi: INFN Regional Storage Center for BABAR

The farm at CASPUR will have CPU servers (capacity quoted in SpecInt95) on a central switch, accessing RAID disk, tape robots and an Objectivity lock server.
Will only receive 10 TB/yr of physics analysis data from SLAC:
–ESD (9 TB/yr) to be stored on tape
–AOD and TAG (1 TB/yr) on RAID disks
About 70 physicist users running 20 concurrent analysis jobs.
Most of the jobs read AOD data from RAID disk - HPSS would be overkill.
Data is exported from the SLAC central databases via DLT tapes and put into the CASPUR mini-robot for re-import into the local federation - under test now.
CASPUR has a local stage system and easily modified the SLAC OOFS/HPSS interface to use it instead: all the standard HPSS calls have a one-to-one correspondence to stage commands, and a few new ones (e.g. mv) had to be added (a sketch of the mapping idea follows below).
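Purely to illustrate the one-to-one mapping idea (the real CASPUR change was made inside the OOFS interface code, not in a script), a minimal shell sketch in which hypothetical get/put/mv operations are forwarded to CERN-style stage commands; the stagein -M form follows the example quoted later in this summary, while the stageout form is an assumption to be checked against the local stager.

    #!/bin/sh
    # storage_op.sh <get|put|mv> <hsm-name> <local-file>
    # Hypothetical dispatcher: each HPSS-style call maps onto one stage command.
    op="$1"; shift
    case "$op" in
      get) stagein  -M "$1" "$2" ;;   # recall the tape-resident file $1 to the local link $2
      put) stageout -M "$1" "$2" ;;   # migrate the local file $2 under the HSM name $1 (assumed syntax)
      mv)  echo "mv has no direct stage equivalent; added as a new call at CASPUR" >&2; exit 1 ;;
      *)   echo "usage: $0 get|put|mv <hsm-name> <local-file>" >&2; exit 2 ;;
    esac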

Slide 10: CERN, H. Renshall: HPSS Experience at CERN

Test system installation in October 1997:
–IBM AIX machines with IBM 3590 drives
–port of the mover to Digital Unix started
–mostly functionality testing, no unexpected problems/delays
–HPSS has two levels of API: simple but slow, as all I/O goes through the master server, or complex but fast (for sequential access) and needing application changes
We decided to modify the existing CERN RFIO interface to use the fast HPSS interface and profit from our existing investment in staging software/hardware.
Users move complete files to/from HPSS via (a worked example follows below):
–rfcp local-file hpsssrv1:/hpss/cern.ch/user/l/loginid/remote-file or vice versa
–stagein -M /hpss/cern.ch/user/l/loginid/remote-file link-file or vice versa
–transparent to Objectivity DB users using the modified SLAC AMS/HPSS interface
Services started in August 98 with 2 IBM servers, each with two IBM 3590 drives, and two COMPAQ servers, each with two STK drives.
High reliability: twin system disks, mirrored data disks, two servers per class.
Currently 600 GB of CHORUS data in 4000 files, 2 TB of NA57 raw data in 3000 files, 1 TB of ATLAS Objectivity test data in 2000 files and a few hundred test-beam files.
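A minimal worked example of the two access routes above, with a hypothetical login ('renshall') and file name; the server hpsssrv1 and the /hpss/cern.ch name space are as quoted on the slide.

    # Copy a local file into HPSS through the RFIO interface, then fetch it back.
    rfcp run456.raw hpsssrv1:/hpss/cern.ch/user/r/renshall/run456.raw
    rfcp hpsssrv1:/hpss/cern.ch/user/r/renshall/run456.raw ./run456.copy

    # Or stage the HPSS-resident file onto stage disk and work through the local link.
    stagein -M /hpss/cern.ch/user/r/renshall/run456.raw run456.lnk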

Slide 11: HPSS Experience at CERN (2)

Ongoing work with HPSS:
–ALICE data challenge to run at 100 MB/s for several days (30 MB/s so far)
–test scalability of the name space to several million files
–upgrade to version 4.1 for new functionality and Y2K compliance
A successful service, but too soon to commit to HPSS:
–the future of the COMPAQ port in the product is unclear - a Sun port is coming
–BABAR is starting soon with HPSS and Objectivity, so we will learn from them
–the ALICE high-data-rate test was delayed by COMPAQ mover performance
–the CERN stager enhancement programme (CASTOR) is well under way
Will run HPSS for two more years with modest expansion and some stress testing to complete the evaluation.
The limited volume of real data in HPSS could be exported to another system if the final decision is to stop HPSS.

Slide 12: Draft Agenda of Next Meeting: SLAC, October 8, 1999. Organiser: F. Gagliardi

Org.   Contact Person   Title
FNAL   D. Petravick     FNAL Mass Storage for Run II
SLAC   R. Mount         SLAC Mass Storage for BABAR
CEBAF  I. Bird          CEBAF Mass Storage plans
BNL    B. Gibbard       RHIC Mass Storage
DESY   M. Ernst         Status of EuroStore
LNF    P. Franzini      Data Storage for KLOE
IN2P3  F. Etienne       Mass Storage plans
RAL    J. Gordon        Mass Storage plans
INFN   M. Mazzucato     Mass Storage task force
CERN   E. McIntosh      Mass Storage plans