Slide 1: Test Results of the EuroStore Mass Storage System
Ingo Augustin, CERN IT-PDP/DM, 9 February 2000, Padova

Slide 2: EuroStore
- EC-funded ESPRIT IV project "A High Performance Storage Project" (EP)
- March – August 2000
- Participants:
  - CERN (HEP)
  - DESY (HEP)
  - QSW (supercomputer manufacturer)
  - HCSA (space applications)
  - AMC (private hospitals)
  - HNMS (meteorological service)
  - TERA (medical foundation)

Slide 3: CERN Role
- Definition of user requirements:
  - LHC central data recording
  - LHC data storage, administration and handling
- Prototype assessment at CERN:
  - scalability
  - performance
  - reliability
  - manageability
- Liaison with HPSS:
  - synergy in data exchange (lack of interest from the HPSS consortium)

Slide 4: User Requirements
- Real-time capabilities:
  - dedicated disks and tape drives for CDR (central data recording)
  - dynamic allocation of resources
  - quotas and priorities for users and groups (see the sketch after this list)
  - file size and number limited only by the OS
  - high reliability and availability
  - tape striping
- General:
  - "multi-vendor"
  - easy manageability
  - tape vaulting
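
The quota-and-priority requirement implies a scheduler that orders competing requests. The slides do not show how EuroStore implements this, so the following is only a minimal Java sketch, with hypothetical names, of priority-ordered dispatch in which a CDR request outranks an analysis request:

```java
import java.util.concurrent.PriorityBlockingQueue;

// Illustrative only: one way per-group priorities could order competing
// storage requests. All names are hypothetical, not EuroStore code.
public class RequestScheduler {

    // A request tagged with the submitting group's priority (higher = sooner).
    record Request(String group, int priority, String file)
            implements Comparable<Request> {
        public int compareTo(Request other) {
            return Integer.compare(other.priority, this.priority);
        }
    }

    private final PriorityBlockingQueue<Request> queue = new PriorityBlockingQueue<>();

    void submit(Request r) { queue.put(r); }

    // Blocks until the highest-priority pending request is available.
    Request next() throws InterruptedException { return queue.take(); }

    public static void main(String[] args) throws InterruptedException {
        RequestScheduler s = new RequestScheduler();
        s.submit(new Request("analysis", 1,  "/pfs/run42/events.dat"));
        s.submit(new Request("cdr",      10, "/pfs/daq/raw001.dat"));
        System.out.println(s.next());  // the CDR request is served first
    }
}
```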

Slide 5: The EuroStore Software (diagram; no text transcribed)

Slide 6: Parallel File System
- Extended CS-2-like file system
- Distributed over multiple nodes and disks
- POSIX compliant (standard application code runs unchanged; see the sketch after this list)
- UNIX based: initial implementation on Sun Solaris, later on DEC UNIX
- Resource Management System
- Planned RAID 5 extension
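
POSIX compliance is what lets existing applications use the PFS without any storage-specific client library. A minimal illustration (the /pfs mount point is a hypothetical example, not a documented EuroStore path):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: a POSIX-compliant PFS needs no special client API; ordinary
// file code works unchanged. The mount point below is hypothetical.
public class PfsDemo {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("/pfs/demo/hello.dat");   // hypothetical PFS mount
        Files.createDirectories(file.getParent());
        Files.writeString(file, "written via standard file I/O\n");
        System.out.print(Files.readString(file));
        // Striping across nodes and disks happens below the POSIX layer.
    }
}
```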

Slide 7: Hierarchical Storage Manager
- Follows the IEEE MSS (Mass Storage System) reference model
- Java based: native threads (multi-threaded), easily portable (see the sketch after this list)
- Objectivity V5 as metadata DB
- Communication via ssh with Blowfish encryption
- Configurable priorities, quotas, allocations and migration
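
The point about Java and native threads is that one portable code base can drive many concurrent movers. A minimal sketch of that style, with hypothetical class and method names (modern java.util.concurrent is used here; the 1999 implementation would have used raw threads):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of a pure-Java HSM component running concurrent mover jobs on
// native threads. The class and the stubbed tape copy are hypothetical.
public class MoverPool {
    private final ExecutorService movers = Executors.newFixedThreadPool(8);

    void migrate(String pfsPath) {
        movers.submit(() ->
            // A real mover would stream the file from the PFS to a tape drive.
            System.out.println(Thread.currentThread().getName()
                               + " migrating " + pfsPath));
    }

    public static void main(String[] args) throws InterruptedException {
        MoverPool pool = new MoverPool();
        List.of("/pfs/a.dat", "/pfs/b.dat", "/pfs/c.dat").forEach(pool::migrate);
        pool.movers.shutdown();
        pool.movers.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```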

Slide 8: Prototype Platform
- Hardware, PFS and HSM delivered to CERN in April 1999
- Platform (QM-1):
  - 4 dual-processor Sun Enterprise 450 nodes
  - connected via ELAN/ELITE network (~250 MB/s)
  - 4 × 4 8 GB Cheetah SCSI disks
  - Gigabit Ethernet interfaces
- Prototype restrictions:
  - movers on QM-1 nodes only
  - STK robotics with Redwood or 9840 drives only

Slide 9: Standard Test Configuration (diagram; no text transcribed)

Slide 10: Interest of CERN
- Not the PFS:
  - one HSM daemon per PFS
  - only efficient on a "non-commodity" network with an optimized kernel

Slide 11: Networking
- A PFS is only useful with an optimized TCP stack (see the sketch below)
- ELAN is far too expensive for large-scale use (~10 kCHF/port)
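
The bandwidth-delay product explains why TCP tuning matters: to keep a Gigabit Ethernet link full, the socket buffers must hold at least bandwidth × round-trip time of in-flight data, e.g. 1 Gbit/s × 4 ms ≈ 500 KB. A minimal Java sketch of the application-side half of this tuning (port number and buffer size are illustrative assumptions):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: size socket buffers toward the bandwidth-delay product
// (1 Gbit/s x 4 ms RTT = 500 KB; rounded up to 1 MB here). The kernel
// (the slide's "optimized TCP stack") must still allow buffers this big.
public class TunedTransfer {
    static final int BUF = 1 << 20;  // 1 MB

    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket()) {
            server.setReceiveBufferSize(BUF);          // must precede bind()
            server.bind(new InetSocketAddress(9000));

            try (Socket client = new Socket()) {
                client.setSendBufferSize(BUF);
                client.connect(new InetSocketAddress("localhost", 9000));
                System.out.println("send buffer granted: "
                                   + client.getSendBufferSize() + " bytes");
            }
        }
    }
}
```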

Slide 12: Storage Management
Common problems in existing systems:
- Manageability:
  - no static environment (new equipment arrives almost all the time)
  - monitoring (efficiency, problem detection)
  - few operators
- Scalability:
  - number of daemons
  - database access
- Reliability:
  - hundreds of tape and disk servers
  - fault tolerance
- Distributed:
  - performance (PC/Linux?)
  - heterogeneous?
  - regional centers?

Slide 13: EuroStore HSM: Status of "Proof of Principle"
- Fault tolerant (e.g. a dead mover or PVR is survived)
- Java works
- Data rates are limited only by the hardware
- Data is safe (only one incident of data loss, whose cause has been fixed)
- Very good monitoring: jobs can be traced throughout the system (see the sketch after this list)
- All problems found so far were bugs, not design faults
- Several components still to be implemented
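
A sketch of the traceability idea the slide claims: give every job one ID and have each component (scheduler, mover, PVR) log against it, so a job's full path through the system can be reconstructed. Component names below are hypothetical:

```java
import java.util.UUID;

// Sketch: every job carries one ID that each component logs, so the
// whole path of a job can be followed end to end. Names are hypothetical.
public class TracedJob {
    final String id = UUID.randomUUID().toString();
    final String file;

    TracedJob(String file) { this.file = file; }

    void log(String component, String event) {
        // One line per hop; filter by id to follow a single job.
        System.out.printf("%s | %-9s | %-20s | %s%n", id, component, event, file);
    }

    public static void main(String[] args) {
        TracedJob job = new TracedJob("/pfs/run42/raw.dat");
        job.log("scheduler", "queued");
        job.log("pvr", "tape volume mounted");
        job.log("mover-3", "copy to tape started");
        job.log("mover-3", "done");
    }
}
```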

Slide 14: (no text transcribed)

Slide 15: Future
- EuroStore will finish in August 2000:
  - implementation of missing features
  - stress testing
  - deployment at DESY (see next talk)
- Proposal for a follow-up submitted to the EC in January 2000:
  - commodity hardware: Linux PCs, low-end storage (DVD-RAM jukebox, …)
  - network scalability
  - additional clients
  - SAN support (by STK)