
Slide 1: Summary of April 1999 HepiX mass storage meeting
Focus, 1 July 1999 - H. Renshall, PDP/IT

Slide 2: HEPiX'99 @ Rutherford Appleton Lab - Mass Storage session
Final Agenda, Fri April 16, 1999, 13.00-16.00. Organiser: F. Gagliardi

Org.      Speaker         Title
FNAL      D. Petravick    Status of the ENSTORE Project and HPSS at FNAL
Quadrics  D. Roweth       The EuroStore Project
CERN      J.-P. Baud      The CASTOR Project
SLAC      C. Boeheim      Mass Storage for BABAR at SLAC
IN2P3     R. Rumler       BABAR Storage at Lyon
INFN      E. Leonardi     INFN Regional Storage Center for BABAR (Rome)
CERN      H. Renshall     HPSS Experience at CERN

Slide 3: FNAL, D. Petravick - Status of the ENSTORE Project and HPSS at FNAL
– Enstore is a data management system to be used for Run II: an integration of FNAL tools with tools from DESY, using DESY's 'Perfectly Normal File System' (PNFS) to manage the file store name space.
– Users copy files to and from the store with a parallelisable encp command, which stores the original file name as part of the eventual tape file (cpio format); see the usage sketch after this slide.
– Will exploit commodity tape drives - tens of drives for D0. Can create and import tapes made outside the system.
– Delivery is tied to the Run II schedule - one Enstore per experiment. Run II will use Gigabit Ethernet.
– Will add a second EMASS robot: total 7 towers, 4 arms, 30000 cartridges.
– For serial media, considering Mammoth 2, AIT and DLT.
– HPSS service very up to date - HPSS 4.1.1. Looking at Y2K issues.
– Added a driveless frame to the IBM 3494 robot, but usage is less than 0.2 TB/week.
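The slide only names the encp command; as an illustration, a minimal sketch of what such a copy into the PNFS-managed name space might look like. The mount point, experiment path and file names are assumptions, not taken from the talk.

  # Hypothetical encp usage (PNFS mount point and file names are invented):
  encp /data/d0/raw_000123.dat /pnfs/fnal.gov/d0/raw/raw_000123.dat   # write into the tape-backed store
  encp /pnfs/fnal.gov/d0/raw/raw_000123.dat /scratch/raw_000123.dat   # recall it later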

Slide 4: Quadrics, D. Roweth - The EuroStore Project: Initial Design and First Results
– Goal: develop a compact, scalable, high-performance storage system based on existing components and long-term experience.
– EU consortium - full partners are DESY, CERN, Quadrics Supercomputer and the Hellenic Space Agency; associates are the Athens medical centre, the Hellenic weather service and the TERA foundation (medical applications of accelerators).
– Manpower 33 person-years and total cost 3 MECU, running for 2 years from March 98.
– User requirements include: less than 5% data-rate reduction in use, tens of TB capacity, no maintenance breaks, automatic database crash recovery, self-describing media contents, scalability to PB, at least 2**64 files, several thousand clients, GB/sec aggregate rate from streams of 10 to 100 MB/sec.
– QSW is working on Parallel File System (PFS) extensions; the Hierarchical Storage Manager (HSM) is being implemented in Java at DESY.
– Sequential PUT/GET of complete files to/from a multi-level hierarchy, with emphasis on the HEP access profile.
– CERN has hosted the QSW prototype since April 99 for access by industrial partners.
– Core implementation is 99% complete and PFS-HSM integration is done; prototype feedback and remaining features (HSM migration) have now started.

Slide 5: CERN, J.-P. Baud - The CASTOR Project
Current SHIFT (CERN staging system) problems:
– more than 10 MB/s per stream is hard to achieve
– the stager catalog does not scale to over 10000 files
– no automatic allocation of tapes or access to files by name
– no automatic migration of data between disk and tape
OSM was rejected, HPSS is being evaluated, EUROSTORE is being closely observed.
CASTOR (the CERN Advanced Storage Manager) is to handle LHC data in a fully distributed environment with good sequential and random access performance:
– high performance and scalability
– modular, so that components can be replaced or commercial products integrated
– provides HSM functionality (name space, migrate/recall)
– supports all Unix and NT platforms and most SCSI tape drives and robotics
– easy to clone and deploy, and to integrate new technologies as they emerge

Slide 6: CASTOR (2) - Integration of emerging technologies
Plan to demonstrate that Linux PCs can be used as tape or disk servers:
– port of the SHIFT software and the STK CSC toolkit completed
– Linux tape driver modified
– Gigabit Ethernet and HIPPI drivers written at CERN
– each PC to support 3-4 drives of STK 9840 (10 GB) or Redwood (50 GB) (Remark: in production at CERN since May)
Plan to use Storage Area Networks (SANs) to decrease the number of data movers:
– CPU servers directly connected to and sharing SAN disks
– classical disk servers no longer needed
– an emerging technology where we need to gain expertise
– the hierarchical storage management function will still be required
Deployment time scales:
– Linux tape servers in spring 99
– test of the new stager catalog (with Delphi) in spring 99
– test of the new data mover in summer 99 and a full COMPASS test (35 MB/s) in autumn 99
– end 99/early 2000: test with Alice at 100 MB/s
– end 99/early 2000: first HSM prototype and SAN testing/integration

Slide 7: SLAC, C. Boeheim - Mass Storage for BABAR at SLAC
– BABAR uses both the Objectivity OO database and the HPSS storage manager.
– The DAQ contains 78 Sun systems with several days of disk buffer - not Objectivity.
– Raw data is pulled from the DAQ both by the reconstruction farm (220 Sun Ultra 5, with 200 more to come) and by the HPSS servers (each at 5 MB/sec).
– Raw and reconstructed data are stored in Objectivity DBs in separate federations.
– Already hit the limit of 1024 federation connections from only 28 farm workers. (Remark: the limit has since been fixed - it was both in Solaris and, harder to fix, in Objectivity.)
– Farm nodes put data into several Objectivity AMS disk servers, which use the OOFS/HPSS interface (via PFTP) to migrate data to HPSS at 10 MB/s aggregate; a hedged sketch of such a transfer follows this slide.
– HPSS data movers are currently IBM but will move to Sun when that port is finished.
– Recall of Objectivity DBs for analysis is to a different set of disk servers, implemented by using an Objectivity change-DB command after storage.
– They have observed that their Objectivity lock servers handle many more locks than expected and do not perform very well. (Remark: Chuck recently confirmed that locks are the current main bottleneck and can even stop the entire system.)
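The slide mentions migration to HPSS "via PFTP" without detail. Purely as an illustration, a sketch of what pushing one database file into HPSS with a parallel-FTP client might look like; the client name, host, paths and the parallel-put command form are assumptions, not from the talk.

  # Hypothetical interactive PFTP session (all names invented; command details vary by HPSS version):
  $ pftp_client hpss.slac.stanford.edu
  ftp> cd /hpss/babar/raw
  ftp> pput run0123.db          # parallel put of the staged database file
  ftp> quit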

Slide 8: IN2P3, R. Rumler - BABAR Data Store at IN2P3
– From SLAC, expect 18 TB in 99, 165 TB in 2000, 370 TB in 2001+.
– Will run Objectivity and HPSS as at SLAC; will store both raw and reconstructed Objectivity DBs using OOFS/HPSS.
– For Objectivity: a Sun E4500 server with 200 GB of disk; a Sun A7000 is under study.
– For HPSS: a 4-processor IBM F50 with 1 GB memory and 28 GB internal disk
  – 16 external 4 GB SSA disks for the disk mover cache
  – 3 Redwood drives in an STK robot
  – see networking at 11.5 MB/s, tape write at 8.5 MB/s, disk I/O at 9 MB/s
– Currently gaining experience of Objectivity, HPSS and their interface.
– Full databases will be transferred from SLAC (a sketch of the pipeline follows this slide):
  – oocopydb at SLAC
  – tar and gzip to a file sent via ftp or tape
  – unzip, untar and ooattachdb at IN2P3
  – currently too slow (1 MB/s per Redwood) and requires locking a federation during the copy, so they will try to optimise this or develop a special tool based on the IN2P3 stager
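As a rough illustration of the transfer pipeline listed above: the tool names oocopydb and ooattachdb come from the slide, but their exact arguments, and all paths, file names and host names, are assumptions; the real invocations should be taken from the Objectivity documentation.

  # At SLAC (argument forms and paths are assumptions):
  oocopydb raw_run0123 /export/copyarea/raw_run0123.db      # copy the database out of the federation
  tar cf raw_run0123.tar -C /export/copyarea raw_run0123.db
  gzip raw_run0123.tar
  # ...send raw_run0123.tar.gz to IN2P3 via ftp, or write it to tape

  # At IN2P3:
  gunzip raw_run0123.tar.gz
  tar xf raw_run0123.tar                                     # yields raw_run0123.db
  ooattachdb -db raw_run0123 -filename ./raw_run0123.db babar_federation.boot   # attach to the local federation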

Slide 9: INFN (Rome), E. Leonardi - INFN Regional Storage Center for BABAR
– The farm at CASPUR will have 2-300 SpecInt95 CPU servers on a central switch, accessing RAID disk, tape robots and an Objectivity lock server.
– Will only receive 10 TB/yr of physics analysis data from SLAC:
  – ESD (9 TB/yr) to be stored on tape
  – AOD and TAG (1 TB/yr) on RAID disks
– About 70 physicist users running 20 concurrent analysis jobs; most of the jobs read AOD data from RAID disk, so HPSS would be overkill.
– Data is exported from the SLAC central DBs via DLT tapes and put into the CASPUR mini-robot for re-import into the local federation - under test now.
– CASPUR has a local stage system and easily modified the SLAC OOFS/HPSS interface to use it instead: all the standard HPSS calls have a one-to-one correspondence to stage commands, and a few new ones (e.g. mv) had to be added. An illustrative sketch of such a mapping follows this slide.
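To illustrate the one-to-one mapping mentioned above (a sketch only, not CASPUR's actual code): assuming the local stage system offers CERN-style stagein/stageout commands, a small wrapper behind the OOFS interface might translate HPSS-style operations roughly like this. Every name here is hypothetical.

  #!/bin/sh
  # Hypothetical wrapper: map an HPSS-style operation onto local stage commands.
  # Usage: oofs_stage.sh get|put|mv <hsm-path> <local-path-or-new-hsm-path>
  op=$1; src=$2; dst=$3
  case "$op" in
    get) stagein  -M "$src" "$dst" ;;   # recall a file from the store to local disk
    put) stageout -M "$src" "$dst" ;;   # migrate a local file into the store
    mv)  stage_mv "$src" "$dst" ;;      # one of the new commands the slide says had to be added
    *)   echo "unknown operation: $op" >&2; exit 1 ;;
  esac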

Slide 10: CERN, H. Renshall - HPSS Experience at CERN
Test system installation in October 1997:
– IBM AIX machines with IBM 3590 drives
– port of the mover to Digital Unix started
– mostly functionality testing, no unexpected problems/delays
– HPSS has two levels of API: simple but slow, as all I/O goes through the master server, or complex but fast (for sequential access) and needing application changes
We decided to modify the existing CERN RFIO interface to use the fast HPSS interface and profit from our existing investment in staging software/hardware.
Users move complete files to/from HPSS via (worked examples follow this slide):
– rfcp local-file hpsssrv1:/hpss/cern.ch/user/l/loginid/remote-file, or vice versa
– stagein -M /hpss/cern.ch/user/l/login/remote-file link-file, or vice versa
– transparent to Objectivity DB users using the modified SLAC AMS/HPSS interface
Services started in August 98 with 2 IBM servers, each with two IBM 3590 drives, and two COMPAQ servers, each with two STK drives.
High reliability: twin system disks, mirrored data disks, two servers per class.
Currently storing 600 GB of CHORUS data in 4000 files, 2 TB of NA57 raw data in 3000 files, 1 TB of Atlas Objectivity test data in 2000 files, and a few hundred test beam files.
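The two commands above are taken from the slide; the login id and file names in the following short example are placeholders added for illustration.

  # Copy a local file into HPSS through the RFIO interface, then recall it (names invented):
  rfcp myrun.dat hpsssrv1:/hpss/cern.ch/user/l/loginid/myrun.dat        # local -> HPSS
  rfcp hpsssrv1:/hpss/cern.ch/user/l/loginid/myrun.dat myrun_copy.dat   # HPSS -> local

  # Stage an HPSS file onto the local disk pool and access it via a link file:
  stagein -M /hpss/cern.ch/user/l/loginid/myrun.dat myrun.link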

Slide 11: HPSS Experience at CERN (2)
Ongoing work with HPSS:
– Alice data challenge to run at 100 MB/s for several days (30 MB/s so far)
– test scalability of the name space to several million files
– upgrade to version 4.1 for new functionality and Y2K compliance
A successful service, but too soon to commit to HPSS:
– the future of the COMPAQ port in the product is unclear - a Sun port is coming
– BABAR is starting soon with HPSS and Objectivity, so we will learn from them
– the Alice high-data-rate test was delayed by COMPAQ mover performance
– the CERN stager enhancement program (CASTOR) is well under way
Will run HPSS for two more years with modest expansion and some stress testing to complete the evaluation. The limited volume of real data in HPSS could be exported to another system if the final decision is to stop HPSS.

Slide 12: Draft Agenda of Next Meeting: SLAC, October 8, 1999. Organiser: F. Gagliardi

Org.    Contact Person   Title
FNAL    D. Petravick     FNAL Mass Storage for Run II
SLAC    R. Mount         SLAC Mass Storage for BABAR
CEBAF   I. Bird          CEBAF Mass Storage plans
BNL     B. Gibbard       RHIC Mass Storage
DESY    M. Ernst         Status of EuroStore
LNF     P. Franzini      Data Storage for KLOE
IN2P3   F. Etienne       Mass Storage plans
RAL     J. Gordon        Mass Storage plans
INFN    M. Mazzucato     Mass Storage task force
CERN    E. McIntosh      Mass Storage plans

