Storage and Storage Access
Rainer Többicke, CERN/IT

Slide 2: Introduction

Data access:
- Raw data, analysis data, software repositories, calibration data
- Small files, large files
- Frequent access
- Sequential access, random access

A large variety.

Slide 3: Usage

Usage                     Speed (stream/aggr)   Size         Access
Tape robot                … / 5 GB/s            PB           Seq, 100s/s
CDR                       100 MB/s / 50 GB/s    100s of TB   Random, 1000/s
Analysis, Back up, Home   10 MB/s / 1 GB/s      10s of TB    Random, 10000/s

Services: AFS, CASTOR

Slide 4: Plan A – CASTOR & AFS

- AFS for software distribution & home directories
- CASTOR for Central Data Recording, processing, mass storage
  - Disk layer, tape layer
- Analysis
  - Combination of AFS & CASTOR
  - Performance enhancements for AFS

Slide 5: AFS – the Andrew File System

- At CERN since 1992
- ~7 TB, users, ~20 servers
- Secure, wide-area access
- Good for high access rates, small files
- Open source, community-supported
  - Developed at CMU under an IBM grant
  - Enhancements in stability and performance
- Service run by 2.2 staff at CERN

Slide 6: CASTOR

- HSM system being developed at CERN
- Tape server layer
  - Robotics (e.g. STK), tape devices (e.g. 9940)
  - 2 PB of data, 13 million files
- Disk buffer layer
  - ~250 disk servers (~200 TB)
- Policy-based tape & disk management
- Development: 4 staff; operation: 5 staff
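The "policy-based tape & disk management" of an HSM like the one described above can be sketched as two decisions: which disk-buffer files to copy to tape, and which disk copies to garbage-collect when the buffer fills. The thresholds, class, and function names below are illustrative assumptions, not CASTOR's actual policy or code.

```python
MIGRATE_AFTER_S = 3600   # hypothetical: migrate files idle for an hour
GC_WATERMARK = 0.90      # hypothetical: free disk space when pool is 90% full

class BufferedFile:
    """A file in the disk buffer, possibly already copied to tape."""
    def __init__(self, name, size, last_access, on_tape=False):
        self.name = name
        self.size = size
        self.last_access = last_access
        self.on_tape = on_tape

def select_for_migration(files, now):
    """Files not yet on tape that have been idle long enough to migrate."""
    return [f for f in files
            if not f.on_tape and now - f.last_access >= MIGRATE_AFTER_S]

def select_for_gc(files, pool_used, pool_size):
    """When the pool is nearly full, propose disk copies that are safe on
    tape for deletion, least recently used first."""
    if pool_used / pool_size < GC_WATERMARK:
        return []
    return sorted((f for f in files if f.on_tape),
                  key=lambda f: f.last_access)
```

The two policies are deliberately independent: migration is driven by file age, garbage collection by pool occupancy, which is one common way to decouple tape bandwidth from disk-space pressure.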

Slide 7: CASTOR & AFS plans

- CASTOR 'Stager' rewrite
  - Design for performance and manageability; new concept demonstrated October 2003
  - Security
  - Pluggable scheduler demonstrated
- AFS development
  - Performance enhancements
  - "Object" disk support
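A "pluggable scheduler" of the kind mentioned above usually means the stager core delegates the choice of the next request to whatever policy object has been plugged in. The class and method names here are hypothetical, used only to illustrate the pattern, and do not reflect the real CASTOR stager API.

```python
class FifoScheduler:
    """Serve requests strictly in arrival order."""
    def pick(self, requests):
        return requests[0]

class SmallestFirstScheduler:
    """Prefer small requests to keep latency low for small files."""
    def pick(self, requests):
        return min(requests, key=lambda r: r["size"])

class Stager:
    """Core logic is policy-free: any object with a pick() method works."""
    def __init__(self, scheduler):
        self.scheduler = scheduler

    def next_request(self, pending):
        return self.scheduler.pick(pending)

pending = [{"file": "a", "size": 900}, {"file": "b", "size": 10}]
print(Stager(FifoScheduler()).next_request(pending)["file"])           # a
print(Stager(SmallestFirstScheduler()).next_request(pending)["file"])  # b
```

Swapping the policy requires no change to the stager core, which is the point of making the scheduler pluggable.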

Slide 8: Plan B – cluster file systems

- Replacement for AFS
- Replacement for the CASTOR disk-server layer
- Replacement for CASTOR
- Basis for a front end to Storage-Area-Network-based storage

Slide 9: Shared file system issues

- Optimization for a variety of access patterns
  - Random/streaming, transfer rate, file sizes, data reuse
- Interface / semantics / platforms
- Security & trust model
- Scaling
- Operation
  - Policy-based management
  - Monitoring
  - Resilience
  - Reconfiguration

Slide 10: Storage (hardware aspects)

- File server with locally attached disks
  - PC-based [IDE] disk server
  - Network Attached Storage (NAS) appliance
- Fibre Channel fabric
  - Confined to a Storage Area Network
  - 'Exporters' for off-SAN access
  - Robustness, manageability
- iSCSI – the SCSI protocol encapsulated in IP
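The iSCSI idea on this slide, a SCSI command carried inside an IP connection, can be illustrated in a few lines. The SCSI READ(10) CDB layout below is the standard 10-byte format, but the surrounding transport header is a deliberately simplified toy, not the real iSCSI PDU format (RFC 3720 defines a 48-byte basic header segment).

```python
import struct

READ_10 = 0x28  # SCSI READ(10) opcode

def scsi_read10_cdb(lba, blocks):
    """10-byte READ(10) CDB: opcode, flags, LBA, group, length, control."""
    return struct.pack(">BBIBHB", READ_10, 0, lba, 0, blocks, 0)

def encapsulate(cdb, seq):
    """Toy transport header (payload length + sequence number) + CDB,
    standing in for the iSCSI PDU that would ride over TCP/IP."""
    return struct.pack(">IH", len(cdb), seq) + cdb

def decapsulate(pdu):
    """Peel the toy header off again and recover the SCSI command."""
    length, seq = struct.unpack(">IH", pdu[:6])
    return seq, pdu[6:6 + length]
```

The point the slide makes is exactly this layering: the block-level command is unchanged, only the cable is replaced by an IP network.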

Slide 11: Storage model (software aspects)

- NAS – data & "control" on the same path
  - "Control": topology, access control, space management
- SAN – data & "control" on separate paths
  - Performance
- Object storage
  - Disks contain "objects", not just blocks
  - Thin control layer, space management
  - Thin authorization layer => security!
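The object-storage model above can be sketched as a device that stores named objects instead of raw blocks and enforces a thin authorization layer, here modelled as an HMAC capability issued by a trusted manager. All names and the secret-sharing scheme are illustrative assumptions, not any particular product's protocol.

```python
import hashlib
import hmac

SECRET = b"shared-device-secret"  # hypothetical: known only to device + manager

def issue_capability(obj_name, op):
    """The trusted manager grants a client access to one object/operation."""
    msg = f"{op}:{obj_name}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

class ObjectDisk:
    """A disk that stores objects, not blocks, and checks capabilities."""
    def __init__(self):
        self.objects = {}  # the device does its own space management

    def put(self, name, data, cap):
        if not hmac.compare_digest(cap, issue_capability(name, "put")):
            raise PermissionError("bad capability")
        self.objects[name] = data

    def get(self, name, cap):
        if not hmac.compare_digest(cap, issue_capability(name, "get")):
            raise PermissionError("bad capability")
        return self.objects[name]
```

The security win the slide points at is that the device can verify each request itself without trusting the client host, unlike a plain block device that must honour any well-formed block read.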

Slide 12: Selection

- Dozens of experimental file systems
- Evaluations and tests
  - Data challenges at CERN
  - Hardware, software technology, benchmarks
- Industry: openlab collaboration with IBM
  - Storage Tank
- Institutes: collaboration with CASPUR (Rome)
  - ADIC StorNext, DataDirect
- Search for "industrial strength" solutions

Slide 13: Candidates I – IBM SANFS (Storage Tank)

- SAN-based, FCP & iSCSI support
- Clustered metadata servers
- Policy-based lifecycle data management
- Heterogeneous, native FS semantics
- Under development: CERN is the first installation outside IBM; limited functionality in Rel. 1; cluster security model
- Scaling?

Slide 14: Candidates II – Lustre

- Object storage
  - Implemented on Linux servers
  - "Portals" interface to IP, InfiniBand, Myrinet, RDMA
- Metadata cluster
- Open source, backed by HP

Slide 15: Candidates III

- NFS
  - NAS model, Unix standard
  - Use case: access to an exporter farm
- SAN file systems – basis for an exporter farm
  - StorNext (ADIC)
  - GFS (Sistina)
  - DAFS – SNIA model
- Panasas – object-storage based

Slide 16: Summary

- Ongoing development to improve the existing solutions (CASTOR & AFS)
  - Limited AFS development
- Evaluation of new products has started
  - Conclusions expected by mid-2004