XRootD & ROOT Considered
Root Workshop, Saas-Fee, September 15-18, 2015
Andrew Hanushevsky, SLAC

Brief History of XRootD
- 1997 – Objectivity, Inc. collaboration
  - Design & development to scale Objectivity/DB
  - First attempt to use a commercial DB for physics data
  - Successful but problematic
- 2001 – BaBar decides to use the ROOT framework instead of Objectivity
  - A collaboration between INFN Padova and SLAC is created
  - Design & develop a high-performance data access system
  - Work based on what we learned with Objectivity
- 2003 – First deployment of the XRootD system at SLAC
- 2015 – Wide deployment with several implementations
  - ALICE, ATLAS, CMS, EXO, Fermi, LSST, among others
  - Protocol available in dCache, DPM, and EOS

Why the Name XRootD?
- Spurred by the pre-packaged rootd daemon
  - The ROOT team planned to upgrade it
  - We wanted to avoid duplication
  - Easier to get people to use a familiar name
- We tried rebranding in 2008
  - Scalla – Structured Cluster Architecture for Low Latency Access
  - Total failure!

ROOT Related Enhancements
- Retrospectively, the two important ones…
  - Server-side: vector reads
  - Client-side: cross-protocol redirects

Vector Reads
- Developed by Rene Brun's team
- Work added to XRootD, but with a twist!
  - Can read blocks from multiple files at once
  - Feature never used by ROOT I/O, and probably never will be
  - In some ways we wish we had not added the twist
- All that said…
  - Many applications still don't use vector reads!
  - Due to not using TTreeCache
  - And it's been there for a very long time
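
For concreteness, a minimal sketch of a vector read issued through the XrdCl client API (the native XRootD client that ROOT's TTreeCache ultimately drives); the URL, offsets, and sizes are placeholders. All chunks travel in a single kXR_readv round trip instead of three separate reads:

    // Minimal sketch, assuming a reachable xroot server (URL is a placeholder).
    // Build with: g++ vread.cc -o vread -lXrdCl
    #include <XrdCl/XrdClFile.hh>
    #include <iostream>

    int main()
    {
      XrdCl::File file;
      if( !file.Open( "root://server.example.org//data/sample.root",
                      XrdCl::OpenFlags::Read ).IsOK() )
        return 1;

      char buf[3 * 4096];
      XrdCl::ChunkList chunks;                        // scattered regions of one file
      chunks.emplace_back( 0,       4096, buf            );
      chunks.emplace_back( 1 << 20, 4096, buf + 4096     );
      chunks.emplace_back( 8 << 20, 4096, buf + 2 * 4096 );

      XrdCl::VectorReadInfo *info = nullptr;
      // One request fetches all three chunks; the server may in turn pass
      // the whole vector down to its storage system (see the next slide).
      if( file.VectorRead( chunks, nullptr, info ).IsOK() )
        std::cout << "read " << info->GetSize() << " bytes in one round trip\n";

      delete info;
      file.Close();
      return 0;
    }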

Vector Read Enhancements
- Enhanced over the years
- Storage-system-level vector reads added
  - Prior code unrolled vector reads before sending the request to the storage system
  - Developed by Brian Bockelman, UNL
- Vector read proxy pass-through
  - A proxy server performance improvement

Cross-Protocol Redirects
- Developed by Lukasz Janyst
- Allows ROOT I/O to switch protocols
- Can improve worker-node clustered storage I/O
  - e.g. xroot:// -> file://
- Can support multi-protocol federations
  - e.g. xroot:// ->
- Currently only the xroot client can do this
  - But it's simple for any protocol to do so
  - We hope all multi-protocol capable plug-ins will add it
- Available in ROOT 6.04.x
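
From the application's side the switch is invisible. A hedged sketch (host and path are placeholders; GetEndpointUrl() is used only to show where the client actually ended up):

    // Hedged sketch: the redirect, if any, happens inside TFile::Open().
    #include <TFile.h>
    #include <TUrl.h>
    #include <iostream>

    void open_redirected()
    {
       // If the requested data is local to this worker node, the server can
       // redirect the client to file:// and ROOT switches handlers in place.
       TFile *f = TFile::Open("root://redirector.example.org//store/run1.root");
       if (f && !f->IsZombie())
          std::cout << "final endpoint: "
                    << f->GetEndpointUrl()->GetUrl() << std::endl;
       delete f;
    }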

ROOT I/O & XRootD I/O
- XRootD can only do so much
  - It already handles most ROOT I/O efficiently
- ROOT I/O adopted many tuning knobs
  - Basket optimization, autoflush, learning, TTreeCache, streaming, splitting, etc.
- It's complicated and not easy to optimize
  - Once optimized, the settings are specific to an analysis mode
  - Change the mode and you may be in trouble!
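
For reference, a minimal sketch of the knobs named above on the reading side (file and tree names are placeholders):

    // Minimal sketch: enabling TTreeCache for a remote read.
    #include <TFile.h>
    #include <TTree.h>

    void tuned_read()
    {
       TFile *f = TFile::Open("root://server.example.org//data/events.root");
       TTree *t = nullptr;
       f->GetObject("Events", t);

       t->SetCacheSize(30 * 1024 * 1024);  // 30 MB TTreeCache
       t->SetCacheLearnEntries(100);       // length of the "learning" phase
       t->AddBranchToCache("*", kTRUE);    // cache all branches

       // Once the cache is trained, many small basket reads are coalesced
       // into a few large (vectored) requests to the server.
       for (Long64_t i = 0; i < t->GetEntries(); ++i)
          t->GetEntry(i);
       delete f;
    }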

Why All The Optimizations?
- It's mostly due to spinning disks!
  - File organization & I/O requests are very sensitive to high-latency devices
- It's also due to network I/O
  - Small reads suffer high latency, especially on the WAN
- Network issues will always be with us
  - Unless "entangled" networks come to pass

Are We Too Focused on HDD?
- HDD $/TB prices are still dropping
  - But not in a good way!
- HDDs are becoming more dense, so…
  - Cost per terabyte is vastly decreasing!
- But HDDs are not getting any faster
  - They are becoming slower in many respects
  - Shingled drives come to mind
- We may be too mesmerized by the "deal"

Need a Change in Hardware!

The Cost Factor Is Still Large
- This is what EMC thinks
- But the ratio is decreasing somewhat faster

HDD Apples to SSD Oranges!
- HDDs are getting more dense but not faster
  - Approaching archive-use utility!
- HDDs use more power and emit more heat
  - Electricity is not getting cheaper

The ROOT I/O SSD Challenge
- If by 2018 SSDs become active storage
  - Either in a hierarchy or as primary storage
- ROOT I/O may be insufficient
  - Its object layout & access algorithms are HDD-oriented
- SSDs have their own peculiarities
  - For example, a large page read-out size
- Time to start rethinking ROOT I/O!
  - How to get the most out of SSDs

XRootD & SSD
- XRootD is already SSD-ready
  - Already supports tiered storage (i.e. SSD+HDD)
  - Used by SLAC for the ATLAS Tier 2
- A reasonable approach until SSD prices drop
  - Rivaling HDD is estimated by 2020
  - Based on improvements to 3D NAND technology
- So 2020 may mean primarily SSD access
  - Will ROOT I/O be ready?

An Orthogonal Approach
- Developing a Scalable Service Interface (SSI)
  - Leverages XRootD features for services
  - e.g. clustered MySQL for LSST queries
  - Testing a 150-node cluster at scale!
- SSI runs in parallel with the XRootD data service
  - It's an optional plug-in for increased flexibility
- What is it?
  - Essentially a LORB (Lightweight Object Request Broker)

The 10,000 KM LORB View
- Three objects (for simplicity): Service, Session, and Request
- Client-side processing flow:
  1. XrdSsiGetClientService()
  2. XrdSsiService::Provision()
  3. XrdSsiSession::ProcessRequest()
  4. XrdSsiRequest::ProcessResponse()
- Server-side processing flow:
  1. XrdSsiGetServerService()
  2. XrdSsiService::Provision()
  3. XrdSsiSession::ProcessRequest()
  4. XrdSsiResponder::SetResponse()
- Object brokering
  - Client changes an object: reflected at the server
  - Server changes an object: reflected at the client
- Uses XRootD technology
  - The client is redirected to the server that can handle the object (provision)
- SSI objects and interactions have been abstracted to be compatible with most other protocols
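
As a conceptual sketch only, the client-side flow amounts to the following; the names mirror the slide, but the argument lists are simplified placeholders rather than the exact XrdSsi headers, and MyRequest is a hypothetical subclass:

    // Conceptual sketch of the client-side SSI flow (not the exact API).
    XrdSsiService *svc  = XrdSsiGetClientService(/* contact point */);
    XrdSsiSession *sess = svc->Provision(/* resource name */);

    MyRequest req;               // hypothetical subclass of XrdSsiRequest
    sess->ProcessRequest(&req);  // ships the request object to the server

    // On the server, its XrdSsiSession::ProcessRequest() runs and an
    // XrdSsiResponder calls SetResponse(); the client is then called back
    // through MyRequest::ProcessResponse() with the brokered result.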

What Can You Do With This?
- Implement arbitrary clustered services
- Provide ROOT I/O flexibility
  - I/O can be optimized out of channel
  - Dependent on what the client is doing
- Provide PROOF a lot more flexibility
  - Optimized event delivery
- The possibilities are endless
  - LSST is at the forefront in leveraging SSI
  - Necessary given their real-time requirements

Conclusion
- XRootD is under active development
  - Always looking for new ideas; feel free to suggest them
- Be a contributor – it's open source
  - Available on GitHub!
- Consider joining the XRootD collaboration
  - It costs no money to join
  - See more at

Acknowledgements
- Current software contributors
  - ATLAS: Doug Benjamin, Ilija Vukotic
  - CERN: Andreas Peters, Sebastien Ponce, Michal Simon, Elvin Sindrilaru
  - Fermi: Tony Johnson
  - ROOT: Gerri Ganis
  - SLAC: Andrew Hanushevsky, Wilko Kroeger, Daniel Wang, Wei Yang
  - UCSD: Alja Mrak-Tadel, Matevz Tadel
  - UNL: Brian Bockelman
  - WLCG: Mattias Ellert, Fabrizio Furano, David Smith
- US Department of Energy
  - Contract DE-AC02-76SF00515 with Stanford University