Future Disk-Only Storage Project
Shaun de Witt, GridPP Review, 20 June 2012

Motivation
- Always good to be aware of options
- But now in particular:
  - CASTOR is no longer used for disk-only storage at CERN
    - There is some 'risk' around future support
  - Starting to hit operational limits
  - CASTOR as we use it is constraining storage purchases
  - Many new options are maturing

Motivation: CASTOR Issues
- Flexibility
  - Minimum deployment size = 1 disk server
  - Cannot use very large storage servers (> ~40 TB)
- Operational limitations
  - Slow access times: lowers job efficiencies; draining servers is time-consuming
  - Not fault tolerant: the database is a single point of failure
  - No hot-file replication, which limits performance
- Complex to administer: requires expertise and considerable staff effort
- Licensing costs (Oracle)

Pitfalls and Considerations
- Any replacement:
  - must be simple and require less operational manpower than the existing solution
  - should be more fault tolerant
  - should have a wider user base and long-term support (open source?)
  - should perform better
- Minimal effort for development
  - Can't write another SRM front end
  - But can contribute to development
- Watch out for 'hidden' licensing costs

Status and Next Steps
- Initial set of requirements gathered
- Candidates identified; undergoing paper review
- Select 4 for further testing
  - Some tests already available from another project
  - Need to spend more effort on testing
  - Rank the solutions
- Deploy a small-scale preproduction set-up for internal testing
- Open to VOs for testing
- Deploy into production
  - Architecture will be based on the final solution
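The "rank the solutions" step above can be sketched as a simple weighted-criteria matrix. This is purely illustrative: the criteria, weights, candidate names, and scores below are hypothetical examples, not the project's actual evaluation.

```python
# Sketch of a weighted-criteria ranking for candidate storage systems.
# Criteria, weights and scores are invented for illustration only.
CRITERIA = {                     # weight reflects assumed relative importance
    "operational_effort": 0.30,  # less manpower than current solution
    "fault_tolerance":    0.25,
    "performance":        0.25,
    "community_support":  0.20,  # wider user base, long-term support
}

def rank(candidates):
    """Return candidate names sorted by weighted score, best first.

    `candidates` maps a name to per-criterion scores on a 0-10 scale.
    """
    def score(scores):
        return sum(weight * scores.get(criterion, 0)
                   for criterion, weight in CRITERIA.items())
    return sorted(candidates, key=lambda name: score(candidates[name]),
                  reverse=True)

candidates = {   # made-up scores, not real test results
    "CandidateA": {"operational_effort": 8, "fault_tolerance": 9,
                   "performance": 6, "community_support": 9},
    "CandidateB": {"operational_effort": 5, "fault_tolerance": 6,
                   "performance": 9, "community_support": 4},
}
print(rank(candidates))  # best-ranked candidate first
```

A scheme like this also makes it easy to fall back to the second-choice system if the first choice fails during VO testing, since the full ordering is retained.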

Timeline
- Shortlist candidate technologies: July 2012
- Set up test beds: Aug 2012
- Complete internal testing and report: Dec 2012
- Deploy preproduction service: Feb 2013
- Open for VO testing: Mar 2013
- Deploy for production: Oct 2013

Risks and Limitations
Limitations:
- No plan currently to migrate data on CASTOR disk to the new service
  - Either let VOs do it, or let disk-only files 'age out' of CASTOR
- Need an SRM until a new version of FTS is available
  - StoRM/BeStMan may be used if we don't choose a HEP solution
Risks:
- First-choice selection may show problems during VO testing
  - Rank the solutions; use experience gained in setting up the test systems to deploy the second-choice system
  - Some slippage is built into the plan, but not much
- Getting GLUE information to the CIP is difficult
  - Need to see what others do and how they do it
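The GLUE/CIP concern above is essentially about publishing storage-area information into the site BDII. As a rough illustration only (attribute names follow the GLUE 1.3 schema, but the DN, endpoint, sizes, and VO below are invented placeholders), a disk-only storage area might be advertised with an LDIF entry along these lines:

```ldif
# Hypothetical GLUE 1.3 storage-area entry of the kind a CIP publishes.
# se.example.ac.uk and all values are placeholders, not a real endpoint.
dn: GlueSALocalID=atlas,GlueSEUniqueID=se.example.ac.uk,mds-vo-name=resource,o=grid
objectClass: GlueSATop
objectClass: GlueSA
objectClass: GlueKey
objectClass: GlueSchemaVersion
GlueSALocalID: atlas
GlueSATotalOnlineSize: 100000
GlueSAUsedOnlineSize: 62000
GlueSAAccessControlBaseRule: VO:atlas
GlueChunkKey: GlueSEUniqueID=se.example.ac.uk
GlueSchemaVersionMajor: 1
GlueSchemaVersionMinor: 3
```

Whatever backend is chosen, something has to generate entries like this for the CIP, which is why looking at how other sites publish non-CASTOR storage is the sensible first step.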