20-22 September 1999 HPSS User Forum, Santa Fe
CERN IT/PDP

Slide 1: History
- Test system: HPSS 3.2 installed in October 1997 on IBM AIX machines with IBM 3590 drives
  - Port of the mover to Digital Unix then started
  - Mostly functionality testing; no unexpected problems or delays
  - API is either simple and slow, or complex and fast but sequential; needs application changes
- Modified the existing CERN RFIO interface to use the fast HPSS interface, to profit from the existing investment (illustrative sketch below)
- Services started in August 1998, using HPSS 3.2
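The value of reusing RFIO is that existing applications keep their usual open/read/close style calls while the library moves the data to and from HPSS. The fragment below is a minimal sketch, not code from the talk: it assumes the usual CERN SHIFT/RFIO client calls (rfio_open, rfio_read, rfio_close, rfio_perror), an assumed header name, and the host:path form shown on the next slide.

    /* Minimal sketch: read a file stored in HPSS through the CERN RFIO
     * client library, so the application code looks the same as for an
     * ordinary SHIFT disk pool. Header name and file path are assumptions. */
    #include <stdio.h>
    #include <fcntl.h>
    #include "rfio.h"   /* CERN SHIFT/RFIO client API; exact header name may differ */

    int main(void)
    {
        char buf[65536];
        int  n;

        /* HPSS files are addressed with the normal RFIO host:path syntax */
        int fd = rfio_open("hpsssrv1:/hpss/cern.ch/user/l/loginid/hpss-file",
                           O_RDONLY, 0);
        if (fd < 0) {
            rfio_perror("rfio_open");
            return 1;
        }
        while ((n = rfio_read(fd, buf, sizeof(buf))) > 0) {
            fwrite(buf, 1, (size_t)n, stdout);   /* e.g. copy to a local stream */
        }
        rfio_close(fd);
        return 0;
    }

Writing follows the same pattern (rfio_open with O_WRONLY|O_CREAT followed by rfio_write), which is roughly what the rfcp command shown on the next slide does on the user's behalf.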

Slide 2: CERN files - where do they go?
- AFS: small files, up to 20 MB; home directories and some project files
- HPSS: medium-sized files, > 20 MB, < 10 GB; these used to be on user-managed tapes!
  - Copy in:   rfcp local-file hpsssrv1:/hpss/cern.ch/user/l/loginid/hpss-file
  - Stage:     stagein -M /hpss/cern.ch/user/l/login/hpss-file link-file
  - Retrieve:  hsm -x get /hpss/cern.ch/user/l/login/hpss-file local-file
  - "Transparent" to Objectivity DB users, via the modified SLAC AMS/HPSS interface
- HPSS: data taking for some experiments and test beams
  - Performance and reliability are very important
  - Files often go into the wrong COS

Slide 3: Production hardware, HPSS 3.2
- IBM F50 RS/6000 main server (ns, bfs, etc.)
  - 512 MB RAM, 2 CPUs, AIX
  - Fast Ethernet
- 2 IBM F50 RS/6000 disk & tape servers
  - 256 MB RAM, 2 CPUs, AIX
  - IBM 3590 drives, 10 GB cartridges
  - 4 x 7 x 18 GB disks, mirrored
  - 344 GB total space in HPSS storage class …
  - … GB total space in HPSS storage class 4 (2 tape copies)
  - HIPPI & Gigabit
- 2 COMPAQ AlphaServer 4100 disk & tape servers
  - 512 MB RAM, 4 CPUs, Digital Unix 4.0D
  - 2 STK Redwood drives; 10 GB, 25 GB, 50 GB cartridges
  - 2 x 7 x 18 GB disks, mirrored
  - 240 GB total space in HPSS storage class 2
  - HIPPI & Gigabit

Slide 4: Network
[Network diagram] HPSSSRV1: HPSS server (IBM RS/6000 F50). HPSS1D01 and HPSS1D02: HPSS disk & tape movers (IBM RS/6000 F50). SHD55 and SHD56: HPSS disk & tape movers (DEC AlphaServer 4100). Servers interconnected via HIPPI, Gigabit and Fast Ethernet; clients on any platform (Chorus, NA57, etc.) connect over Fast Ethernet and HIPPI.

Slide 5: HPSS Performance
[Performance diagram] Two IBM F50 servers, each with mirrored Ultra SCSI disks and IBM 3590 drives, linked over HIPPI: 25 MB/s.

Slide 6: Data currently in HPSS at CERN
- Mixed-size files: 1 TB, … files, 60 MB average
- Raw experimental data and test beam files
  - NA57 raw data: 2 TB, 3000 files (to be repeated)
  - ATLAS Objectivity test data: 1 TB in 2000 files (to be repeated)
  - CMS: 700 GB, 5000 test beam files, 140 MB average
  - Sometimes 3 TB in one day
- Total: 10 TB, … files, … tape mounts / day

Slide 7: Current and Future Work
- Ongoing work with HPSS:
  - ALICE data challenge largely successful, will be repeated
  - Test scalability of the name space to several million files
  - Upgrade to … for new functionality & Y2K, when Encina TX arrives!
- Successful service, but too soon to commit to HPSS:
  - Completion of the COMPAQ port now underway, and a Solaris port is coming
  - BaBar started with HPSS & Objectivity, so we will learn from them
  - CERN stager enhancement program (CASTOR) well under way
- Will run HPSS until end 2000, with modest expansion and some stress testing, to complete the evaluation
- The limited volume of real data in HPSS could be exported to another system if the final decision is to stop HPSS

Slide 8: HPSS Wish List
- Short term
  - Encina TX series 4.2, so we can move/port to HPSS
  - Better monitoring information, to help with the Redwood problems
  - New changecos option helps, but other improvements are needed
- Long term
  - Non-DCE movers
  - Movers running on Linux
  - Improved random access and small-file migration performance
  - Guaranteed input rate
    - Ways to avoid disk contention problems
    - Number of tape drives dedicated to a COS
  - Avoid stopping the PVL etc. to change configuration
  - Multiple HPSS systems on the same license