Mass Storage at SARA Peter Michielse (NCF) Mark van de Sanden, Ron Trompert (SARA) GDB – CERN – January 12, 2005.

Slide 2: Contents
- High-level set-up at SARA
- Storage details
- December 2004 situation for the Service Challenge
- Service Challenge tests and possible configurations (with the MB/s target for July 2005 in mind)

Slide 3: High-level set-up at SARA - 1 (diagram)

Slide 4: High-level set-up at SARA - 2
[Diagram labels: SurfNet NL; SurfNet CERN; extension to Almere with disk storage and mass storage, including SL8500 and tape drives; IA32 cluster; 14 TB RAID5; 4x 9840C and 3x 9940B tape drives; STK 9310; P1, Jumbo, P4, A7 (IRIX and Linux64 nodes); Amsterdam SAN with 6x 16-port and 3x 32-port FC switches; other nodes (AIX, Linux32).]

Slide 5: Storage Details - 1 (diagram)

Slide 6: Storage Details - 2
P1:
- SGI Origin (MIPS), 32 GB memory
- Part of the 1024p TERAS system
- IRIX (SGI Unix)
- CXFS (SGI's shared file system) since 2001
- Interactive node for users to test and submit their jobs
- Has been used so far as the Grid Storage Element
Jumbo:
- SGI Origin 350, 4p (MIPS), 4 GB memory
- IRIX (SGI Unix)
- CXFS metadata server
- DMF/TMF (SGI's hierarchical storage manager)

Slide 7: Storage Details - 3
DMF/TMF (SGI's hierarchical storage manager)
[Diagram: file-system occupancy scale from 0% to 100% showing regular files, dual-state files and free space, with the free-space minimum, free-space target and migration-target thresholds that drive the threshold-driven migration events.]
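For readers unfamiliar with DMF-style space management, the following is a minimal sketch of the threshold mechanism the diagram describes. The threshold values, the FileSystem model and all names are hypothetical and only illustrate the idea of dual-state files and threshold-driven migration; they are not DMF's actual interface or defaults.

```python
from dataclasses import dataclass

# Minimal sketch of the threshold mechanism shown on this slide. The
# thresholds and the FileSystem model are hypothetical illustrations,
# not DMF's actual interface or defaults.

FREE_SPACE_MINIMUM = 0.10  # below this, start releasing disk blocks
FREE_SPACE_TARGET = 0.20   # release dual-state files until this much is free
MIGRATION_TARGET = 0.30    # keep this fraction free or dual-state (pre-migrated)

@dataclass
class FileSystem:
    regular: float      # fraction occupied by disk-only (regular) files
    dual_state: float   # fraction occupied by files also copied to tape

    @property
    def free(self) -> float:
        return 1.0 - self.regular - self.dual_state

def migration_pass(fs: FileSystem) -> None:
    # 1. Pre-migrate: copy regular files to tape so they become dual-state,
    #    until enough of the file system is free or already on tape.
    shortfall = MIGRATION_TARGET - (fs.free + fs.dual_state)
    if shortfall > 0:
        moved = min(shortfall, fs.regular)
        fs.regular -= moved
        fs.dual_state += moved      # data now lives on disk *and* tape

    # 2. If free space fell below the minimum, release the disk copy of
    #    dual-state files (already safe on tape) until the target is met.
    if fs.free < FREE_SPACE_MINIMUM:
        released = min(FREE_SPACE_TARGET - fs.free, fs.dual_state)
        fs.dual_state -= released   # those files become tape-only

fs = FileSystem(regular=0.85, dual_state=0.10)   # only 5% free: under the minimum
migration_pass(fs)
print(f"free={fs.free:.2f}, dual_state={fs.dual_state:.2f}")   # free=0.20, dual_state=0.10
```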

Slide 8: December 2004 situation for the Service Challenge - 1
[Diagram: data coming out of CERN T0 into HP rx2600 servers (2x 1.5 GHz Itanium-2), each with 1 GE to the LAN and 1 GE to the WAN; a GE switch (1 GE per server) feeding a 10 GE router; SurfNet with 4x 1GE to Amsterdam.]
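As a quick sanity check on what the 4x 1GE path shown above can carry, the short calculation below converts the link capacity into MB/s. The protocol-efficiency figure is an assumption for illustration, not a number from the slides.

```python
# Back-of-the-envelope ceiling for the 4x 1GE SurfNet path in this diagram.
# The protocol-efficiency figure is an assumption, not a number from the slides.

links = 4
gbit_per_link = 1.0            # Gbit/s per GE link
protocol_efficiency = 0.94     # assumed usable fraction after Ethernet/IP/TCP overhead

raw_mb_per_s = links * gbit_per_link * 1000 / 8       # 4 Gbit/s -> 500 MB/s raw
usable_mb_per_s = raw_mb_per_s * protocol_efficiency  # roughly 470 MB/s usable

print(f"raw ceiling:      {raw_mb_per_s:.0f} MB/s")
print(f"usable estimate:  {usable_mb_per_s:.0f} MB/s")
```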

Slide 9: December 2004 situation for the Service Challenge - 2
This was the situation so far (disk-to-disk):
- P1 (the Grid SE) is part of a general-purpose production facility, so it is difficult to experiment with
- It runs a proprietary OS (SGI IRIX), while the tools are based on Red Hat 7.3
- Limited by disk storage
- Relatively old Gbit cards in P1
- Planning was not optimal
- The DMF HSM seems to be an advantage
[Diagram: SurfNet, 4x 1GE from CERN, arriving in Amsterdam at the T1: P1 (IRIX), 14 TB RAID5, 4x 9840C and 3x 9940B tape drives, STK 9310, 6x 16-port and 3x 32-port FC switches; 4 Gbit/s available for the test.]

Slide 10: Service Challenge tests until July 2005 - 1
- Timeline:
  When             | CERN T0                        | SARA/NIKHEF T1
  January-March 05 | MB/s                           | Set up and test initial configurations
  April-June 05    | Set up preferred configuration | MB/s in place
  July 05          | MB/s                           | Demonstrate: MB/s for 1 month, CERN disk to SARA/NIKHEF tape
- We need to figure out an alternative to: [diagram]

Slide 11: Service Challenge tests until July 2005 – 2: Ideas
- Separation of tasks (see the sketch after this list):
  - Servers that handle incoming data traffic from T0
  - Servers that handle outgoing data traffic to T2s
  - Servers that handle mass storage (tape)
- Consequences for the storage environment:
  - Direct attached storage
  - Integrated storage: SAN, global file system
  - Layers of disks
  - Hardware and software choices
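To make the separation of tasks concrete, here is a small illustrative sketch. The roles, host names and routing helper are hypothetical and model the proposed split between server pools, not SARA's actual configuration.

```python
from dataclasses import dataclass
from typing import Literal

# Illustrative sketch only: the roles, host names and routing helper are
# hypothetical and model the proposed separation of tasks, not SARA's
# actual configuration.

Role = Literal["t0_import", "t2_export", "tape"]

@dataclass
class ServerPool:
    role: Role
    hosts: list[str]

pools = [
    ServerPool("t0_import", ["imp01", "imp02"]),  # receive data from CERN T0
    ServerPool("t2_export", ["exp01", "exp02"]),  # serve data to Tier-2 sites
    ServerPool("tape",      ["hsm01"]),           # drive the mass-storage (tape) back end
]

def pool_for(task: Role) -> ServerPool:
    # Route each task to its dedicated pool, so incoming T0 traffic, outgoing
    # T2 traffic and tape handling never compete for the same hosts.
    return next(p for p in pools if p.role == task)

print(pool_for("t0_import").hosts)   # ['imp01', 'imp02']
```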

Slide 12: Service Challenge tests until July 2005: Several alternatives - 1
Pro:
- Simple hardware
- Some scalability
Con:
- No separation of incoming (T0) and outgoing (T2) data
- No integrated storage environment
- Tape drives fixed to a host/file system

Slide 13: Service Challenge tests until July 2005: Several alternatives - 2
Pro:
- Simple hardware
- More scalability
- Separation of incoming (T0) and outgoing (T2) data
Con:
- More complex set-up, so more expensive
- No integrated storage environment
- Tape drives fixed to a host/file system

Slide 14: Service Challenge tests until July 2005: Several alternatives - 3
Pro:
- Simple hardware
- More scalability
- Separation of incoming (T0) and outgoing (T2) data
- HSM involved, so tape drives are not fixed to a host/file system
- Integrated storage environment
Con:
- More complex set-up, so more expensive
- Possible dependence on the HSM
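The distinguishing feature of this alternative is that an HSM behind a shared, global file system decouples tape drives from individual servers. The sketch below illustrates that idea with hypothetical paths and helpers; it is not the actual Service Challenge setup.

```python
import shutil
from pathlib import Path

# Rough sketch with hypothetical paths and helpers (not the actual setup):
# in this alternative, import and export servers all use one shared,
# HSM-managed file system, so which tape drive eventually holds a file is
# decided by the HSM rather than by the server that wrote it.

SHARED_FS = Path("/shared/hsm")     # assumed mount point of the global file system

def import_from_t0(local_file: Path) -> Path:
    """Any T0-import server simply copies into the shared file system;
    the HSM migrates the file to tape later, on its own thresholds."""
    dest = SHARED_FS / "incoming" / local_file.name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(local_file, dest)
    return dest

def export_to_t2(name: str) -> Path:
    """Any T2-export server reads the very same path; if the disk copy has
    been released, the HSM recalls the file from tape transparently."""
    return SHARED_FS / "incoming" / name
```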

Slide 15: Summary
- Choices to be made
- Based on tests in the first half of 2005 (1H CY2005)
- Hardware, software and configuration
- First tests with the available Opteron cluster, as in alternative 1, not yet with tape drives
- Subsequently, testing of the other alternatives