Presentation transcript:

San Diego Supercomputer Center / National Partnership for Advanced Computational Infrastructure

Mass Storage (HPSS) Configuration

[Diagram: the HPSS mass-storage hardware. Roughly 6 TB of SSA disk (eight SSA RAID loops) and an 830 GB MaxStrat disk array attach to the HPSS server nodes (Silver Wide, Winterhawk Wide, and Winterhawk Thin SP nodes) on the SP Switch. Network paths run through a 100 Mb network, a Gig-E switch, a HiPPI switch, an ATM switch, and the High Performance Gateway Node (HPGN) to the Terascale computer. StorageTek tape libraries hold four banks of 8 STK 9840 tape drives and two banks of 8 IBM 3590 tape drives.]

HPSS Configuration
- SP system with 20 nodes: 8 Silver Wides, 4 Winterhawk Wides, and 8 Winterhawk Thins
- Direct network connectivity through 100 Mb Ethernet, Gigabit Ethernet, and HiPPI; ATM via the HPGN
- Close to 7 TB of disk as SSA RAID or MaxStrat RAID
- 48 tape drives: 16 IBM 3590 and 32 STK 9840 (tallied in the sketch below)
- Striping essential to required performance
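As a rough sanity check of the figures on this slide, here is a minimal Python sketch; the capacities and counts are taken from the slides, and the variable names are ours.

```python
# Tally of the HPSS resources listed on the slides (illustrative check only).

ssa_disk_tb = 6.0            # SSA RAID disk, TB (from the configuration diagram)
maxstrat_gb = 830.0          # MaxStrat disk array, GB

total_disk_tb = ssa_disk_tb + maxstrat_gb / 1000.0
print(f"Disk: ~{total_disk_tb:.1f} TB")                  # ~6.8 TB, i.e. "close to 7 TB"

tape_drives = {"IBM 3590": 16, "STK 9840": 32}
print(f"Tape drives: {sum(tape_drives.values())}")       # 48

sp_nodes = {"Silver Wide": 8, "Winterhawk Wide": 4, "Winterhawk Thin": 8}
print(f"HPSS SP nodes: {sum(sp_nodes.values())}")        # 20
```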

8-Way Striped Data Transfers to HPSS

[Diagram: the Terascale SP2 connects over the SP Switch to four NH "Router" nodes; these feed a Gig-E switch that links to four HPSS Winterhawk (WH) nodes, which drive the 9840 tape drives behind HPSS. Data transfers to HPSS are striped 8 ways.]
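The "8-way striped" transfer in the diagram spreads a single transfer across eight parallel paths. As a generic illustration (not the actual GPFS/HPSS mover logic), a round-robin mapping of blocks to eight stripe targets might look like the following; the block size is an assumption for the example.

```python
# Generic round-robin striping illustration: successive blocks of a transfer
# are dealt out to 8 stripe targets so that 8 paths stay busy in parallel.
# Stripe width matches the slide; the block size is assumed for illustration.

STRIPE_WIDTH = 8            # 8-way striping, per the slide
BLOCK_SIZE = 1 << 20        # 1 MiB blocks (assumed)

def stripe_target(offset: int) -> int:
    """Return which of the 8 stripe targets serves the block at this byte offset."""
    return (offset // BLOCK_SIZE) % STRIPE_WIDTH

# Blocks 0..9 land on targets 0, 1, 2, ..., 7, 0, 1
print([stripe_target(i * BLOCK_SIZE) for i in range(10)])
```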

Striped Data Transfers to HPSS
- Machines are logically subnetted at the router
- Terascale machine organized into 4 network "quadrants"
- HPSS servers divided across 2 networks
- "Router" nodes do network I/O to HPSS on behalf of the remaining nodes in a quadrant (see the sketch below)
- Striping is dependent on GPFS
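The quadrant/router arrangement can be sketched as a simple mapping from compute nodes to the router node that carries their HPSS traffic. The number of nodes per quadrant and the naming are assumptions made only for this example.

```python
# Sketch of the quadrant routing idea: compute nodes are grouped into four
# network quadrants, and one "router" node per quadrant performs the network
# I/O to HPSS on behalf of the other nodes in that quadrant.
# Nodes-per-quadrant and names are assumptions for illustration only.

NUM_QUADRANTS = 4
NODES_PER_QUADRANT = 16          # assumed

def quadrant_of(node_id: int) -> int:
    """Map a compute node to its network quadrant."""
    return (node_id // NODES_PER_QUADRANT) % NUM_QUADRANTS

def router_for(node_id: int) -> str:
    """HPSS traffic from a node goes through its quadrant's router node."""
    return f"router-{quadrant_of(node_id)}"

# Example: node 37 is in quadrant 2, so its HPSS I/O goes via router-2.
print(quadrant_of(37), router_for(37))
```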

[Diagram: RVSD disk layout. Ten SSA RAID loops are attached over A/B paths to NH nodes and RIO units running RVSD. 12 RVSD servers (6 redundant pairs); 60 drawers of [3 x (4+P) RAID + hot spare] = 960 x 36 GB disks.]

RVSD Server Configuration
- 12 RVSD (IBM Recoverable Virtual Shared Disk) servers in the system, in 6 redundant pairs
- Each server drives 5 drawers of disk as primary, plus 5 additional during failover
- One drawer per loop, configured as three 4+P arrays with a hot-spare disk
- Total of 60 drawers of 36 GB disks (960 disks); the arithmetic is checked in the sketch below
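The drawer and disk counts on this slide are mutually consistent. A short Python sketch walks through the arithmetic using only numbers from the slides; the capacity figures at the end are derived, not stated on the slides.

```python
# Consistency check of the RVSD disk configuration (figures from the slides).

rvsd_servers = 12
primary_drawers_per_server = 5
drawers = rvsd_servers * primary_drawers_per_server
assert drawers == 60

# Each drawer: three 4+P RAID arrays plus one hot spare = 16 disks.
disks_per_drawer = 3 * (4 + 1) + 1
total_disks = drawers * disks_per_drawer
assert total_disks == 960

disk_gb = 36
raw_tb = total_disks * disk_gb / 1000.0            # ~34.6 TB raw (derived)
usable_tb = drawers * 3 * 4 * disk_gb / 1000.0     # data disks only, ~25.9 TB (derived)
print(f"{total_disks} disks, ~{raw_tb:.1f} TB raw, ~{usable_tb:.1f} TB usable")
```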