Robotics and Tape Drives

Robotics and Tape Drives: Activities (and non-activities)
31 January 2006

Robotics
We presently have two sites: 513 (large) and 613 (small).
There are three library types: IBM 3584, STK SL8500, STK Powderhorn.
The largest installation is also the oldest: the 5 plus 5 Powderhorns (60,000 slots).
The newest are the fully configured IBM 3584 and the STK SL8500. These were used (to some extent) in 4Q05, and this will continue in 2006.
The IBM 3584 is ~6,000 slots, ~3 PB max, with 40 IBM 3592 E05 drives.
The STK SL8500 is ~7,500 slots, and will be ~3.5 PB max when the 40 T10000 drives arrive.
The Powderhorn libraries are large but only have a future with the T10000: they do not support the IBM 3592 E05 or LTO drives. They would represent a useful 30 PB, but reach end-of-life around 2010…
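As a rough cross-check of the capacity figures above, each ceiling is just slot count times native cartridge capacity. A minimal sketch, using the slot counts quoted on this slide and the 500 GB cartridge capacity from the drive list later in this talk:

```python
# Rough library capacity ceilings: slots x native cartridge capacity.
# Figures are the approximate ones quoted on this slide.
PB_IN_GB = 1_000_000  # decimal units, as vendors quote them

libraries = {
    "IBM 3584 (3592 E05 media)":     (6_000,  500),   # slots, GB per cartridge
    "STK SL8500 (T10000 media)":     (7_500,  500),
    "STK Powderhorn x10 (T10000)":   (60_000, 500),
}

for name, (slots, gb_per_cart) in libraries.items():
    print(f"{name}: {slots * gb_per_cart / PB_IN_GB:.2f} PB max")
# IBM 3584: 3.00 PB; SL8500: 3.75 PB (the slide says ~3.5 PB, presumably
# allowing for reserved/cleaning slots); Powderhorns: 30 PB.
```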

IBM 3584

STK SL8500

513 Powderhorns, IBM 3584

613 Powderhorns

Tight(ish) squeeze….

Drives
We presently have five drive types:
IBM 3592 E05: 500 GB, ~100 MB/s; 40 units here, but only ~250 cartridges.
STK T10000: two 'fat' test drives, 500 GB, ~120 MB/s, with 1,500 cartridges here; we expect 40 production T10000s.
These IBM and STK products are the newest. Both were tested in 3-4Q05. Tests (and limited production) should resume when the tape servers are installed and the STK drives arrive.
IBM 3592: 8 units, 300 GB, ~40 MB/s, in a small 3584 (serving TSM).
HP LTO-3: 6 units, 300 GB, ~80 MB/s, in the SL8500 system (??).
The oldest, the STK 9940B (200 GB, ~30 MB/s), is still the main 'production' drive, with 44 units. Retirement starts 3Q06?
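To put the generations in perspective, a quick sketch of aggregate native throughput per drive family, using only the counts and per-drive speeds quoted above (the 40 T10000s are expected, not yet installed):

```python
# Aggregate native throughput by drive family (counts/speeds from this slide).
fleet = [
    # (family, units, MB/s per drive)
    ("IBM 3592 E05",          40, 100),
    ("STK T10000 (expected)", 40, 120),
    ("IBM 3592 (first gen)",   8,  40),
    ("HP LTO-3",               6,  80),
    ("STK 9940B",             44,  30),
]

for family, units, rate in fleet:
    print(f"{family}: {units} x {rate} MB/s = {units * rate / 1000:.1f} GB/s aggregate")
# The 44 production 9940Bs together deliver ~1.3 GB/s; either new family
# alone would roughly quadruple that (~4.0-4.8 GB/s) with fewer drives.
```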

Vexations
We have the wrong network from disk server to tape server (Gigabit Ethernet).
We have the wrong FC adapters (the new drives will be 4 Gb/s FC).
We have the wrong disk servers (single 100 MB/s streams are not easy).
Do we still have a sensible architecture (disk server layer, tape server layer)?
The next drives will be even faster, more embarrassing, and they are not that far off: LTO-4 in 1-2Q07.
Upgrades of top-line drives happen every 2 years or so; 2007-8 for IBM and STK?
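Why Gigabit Ethernet is "the wrong network": a single GbE link carries at most 125 MB/s on the wire, so one modern drive consumes essentially the whole link. A minimal sketch; the 90% protocol-overhead factor is an illustrative assumption, not a measurement:

```python
# One GbE link vs one modern tape drive (drive speeds from the Drives slide).
GBE_RAW_MBPS = 1000 / 8        # 125 MB/s on the wire
USABLE_FRACTION = 0.90          # assumed fraction left after TCP/IP overhead
usable = GBE_RAW_MBPS * USABLE_FRACTION

for drive, speed in [("IBM 3592 E05", 100), ("STK T10000", 120)]:
    share = speed / usable
    print(f"{drive}: needs {speed} MB/s vs ~{usable:.0f} MB/s usable "
          f"-> one stream uses {share:.0%} of the link")
# A T10000 alone would oversubscribe the link; the drive then drops out of
# streaming and starts repositioning, wasting most of its native speed.
```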

Is it confusion, or purpose?
A complete IBM library, 40 drives, but very little media.
A complete SL8500 library (almost…), the planned amount of media, but only test drives.
A lot of unused Powderhorn equipment.
A shortage of tape servers (hopefully soon to be remedied, but…).
Purpose?
The Powderhorn will support the T10000.
Both the 3584 and the SL8500 can support LTO-n.
It was hoped to test 'new tape layer architectures', FC fabrics, …

And, as we move forward…
Need to use the new media, drives, libraries and tape servers for 'new data' ASAP. We have 3,000 media, or 1.5 PB, and the existing CASTOR holds ~4.5 PB.
Need to use the new drives (both types) for the forthcoming Data Challenges.
Prepare a call for tenders for more equipment before end 2006; install this equipment end 2006 or early 2007.
By luck, we have a very wide range of options: NO candidate has yet been shown to be overwhelmingly good, or to have definitely failed.
We STILL have the LTO-n option (3584 or SL8500).
The Powderhorns can carry on for longer, but need the T10000.

Perhaps more difficult…
Need to improve rmcdaemon/smc IBM robot control: ignore nonsense replies ('access geometry failure', …), add a remote 'move media', remove incorrect drive 'DOWN's.
Need to improve the efficiency of drive use in CASTOR (or spend more): as low as 5% effective for READ, and poor for WRITE. This is software, where there is a distinct lack of personnel (and time). Candidate optimisations (the last is sketched below):
Select 'drive and media' for the next read/write, and pre-position the media: ~60s saved.
When writing tapes, fill to near BOT (all drives): ~90s saved.
Write all small files in the BOT turn-around area: drives then look 'almost' like a 9840.
Write small files on the fastest label-writing drive (IBM?): ~6s saved per file.
Read files in order from BOT, NOT in sequence order: reads become 2-3 times as fast.
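A minimal sketch of that last optimisation: serve queued recalls in physical tape order (distance from BOT) rather than in request-arrival order. The file-sequence numbers and request list here are hypothetical examples; CASTOR's real scheduling interfaces are not shown:

```python
# Reorder pending reads by physical position on tape (hypothetical example).

def order_reads_by_position(requests):
    """Sort pending read requests by file sequence number on tape, so the
    drive sweeps forward from BOT once instead of seeking back and forth
    in arrival order."""
    return sorted(requests, key=lambda r: r["fseq"])

queue = [
    {"name": "fileC", "fseq": 812},
    {"name": "fileA", "fseq": 17},
    {"name": "fileB", "fseq": 403},
]

for req in order_reads_by_position(queue):
    print(f"recall {req['name']} at fseq {req['fseq']}")
# One forward pass over the tape; the slide quotes reads 2-3x as fast.
```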