HEPiX FSWG Progress Report, Andrei Maslennikov and Michel Jouvin, GDB, May 2nd, 2007


Slide 2: Outline
- Mandate and work plan
- First achievements
- Next milestones

Slide 3: Credits
- This presentation is based on Andrei Maslennikov's presentation at the last HEPiX meeting (DESY, April 25th)
- Full (detailed) presentation available at …nId=39&materialId=slides&confId=257

Slide 4: WG Mandate
- The group was commissioned by IHEPCCC at the end of 2006 under the umbrella of HEPiX
  - Chairman: Andrei Maslennikov
  - Officially supported by the HEP IT managers
- The goal is to review the available file system solutions and storage access methods, and to disseminate the know-how among HEP organizations and beyond
  - Not focused on grid
- Timescale: February 2007 to April 2008
- Milestones: two progress reports (HEPiX Spring 2007 and Fall 2007) and one final report (HEPiX Spring 2008)

Slide 5: Members
- Currently we have 20 people on the list, but only the following 15 have attended the meetings/conference calls and contributed since the group started:
  - BNL: R. Petkus
  - CASPUR: A. Maslennikov (Chair), M. Calori (Web Master)
  - CEA: J-C. Lafoucriere
  - CERN: B. Panzer-Steindel
  - DESY: M. Gasthuber, P. van der Reest
  - FZK: J. van Wezel, S. Meier
  - IN2P3: L. Tortay
  - INFN: V. Sapunenko
  - LAL: M. Jouvin
  - NERSC/LBL: C. Whitney
  - RZG: H. Reuter
  - U. Edinburgh: G. A. Cowan
- These members maintain contacts with several other important labs such as LLNL, SLAC, JLAB, DKRZ, PNL and others

Slide 6: Work Plan
- The work plan for the group was discussed and agreed upon during the first two meetings; the emphasis will be on shared / distributed file systems
- We start with an Assessment of the existing file system / data access solutions; at this stage we will try to classify the storage use cases
- Next, during the Analysis stage we will try to get a better idea of the requirements for each of the classes defined in the previous stage
- This will be followed by the selection of viable Candidate Solutions for each of the storage classes, and possibly by an Evaluation of some of them on common hardware
- The Final Report, with conclusions and practical recommendations, is due by the Spring 2008 HEPiX meeting

Slide 7: Assessment progress
- Prepared an online questionnaire on deployed file stores (hepix/hepix)
- Selected 21 important sites to be covered: Tier-0, all Tier-1s, plus several large labs/organizations such as CEA, LLNL and DKRZ
- All selected sites were invited to fill in the questionnaire for their most relevant file store solutions; at least two areas had to be covered: home directories and the largest available shared file store

Slide 8: Sites under assessment
- ASGC, BNL, CC-IN2P3, CEA, CERN, CNAF, DAPNIA, DESY, DKRZ, FNAL, FZK, JLAB, LLNL, NERSC, Netherlands LHC, NDGF, PIC, PNL, RAL, RZG, SLAC, TRIUMF
- Status legend (colour-coded on the original slide): collected / being verified; being collected; no info / no contact

Slide 9: Initial Observations
- The big picture looks somewhat chaotic; the reasons for choosing one storage access platform over another are often not clear
- The data collected still have to be verified. Some of the numbers provided by the local "info collectors" appear to be imprecise, and in several cases some fields of the questionnaire were interpreted differently by different info collectors. We have therefore scheduled an effort to clean this up ("normalize") and to see whether the questions should be rephrased
- So far we have only been able to produce a pair of intermediate plots based on the data collected from 14 of the planned 21 sites, but even these partial data tell us something. We only looked at the online disk areas; the slow tier (tape backend) has to be studied separately, and we are still missing plenty of data
- The total reported online disk space is large but not particularly impressive: 13.7 PB over all sites, including the non-HEP organizations (compare with the planned PB/year for LHC production)
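To illustrate the kind of normalization and summary step described above, here is a minimal sketch that aggregates reported online capacity per file system; the record layout, site names and capacity figures are purely illustrative assumptions, not the actual questionnaire data.

```python
from collections import defaultdict

# Hypothetical questionnaire records: (site, file system, online capacity in TB).
# Names and numbers are placeholders, not the real survey data.
records = [
    ("SiteA", "Lustre", 1200.0),
    ("SiteA", "AFS",      15.0),
    ("SiteB", "GPFS",    800.0),
    ("SiteB", "dCache",  500.0),
    ("SiteC", "CASTOR",  900.0),
]

def summarize(rows):
    """Sum the reported online capacity per file system, in TB."""
    totals = defaultdict(float)
    for site, fs, capacity_tb in rows:
        totals[fs] += capacity_tb
    # Sort by deployed capacity, largest first.
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

if __name__ == "__main__":
    for fs, tb in summarize(records).items():
        print(f"{fs:8s} {tb / 1000:.2f} PB")
```

Running this prints a per-file-system total in PB, which is roughly the shape of the intermediate plots mentioned above.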

Slide 11: File systems / data access solutions in use
- Please note that we are still very far from any conclusions; however, here are some facts and thoughts:
  - The large initial list of candidate solutions can probably be reduced to just 7 names: Lustre, GPFS, HPSS, CASTOR, dCache, AFS and NFS
  - AFS and NFS are mostly used for home directories and software repositories and remain very popular
  - Solutions with an HSM function (HPSS, CASTOR and dCache) have a similar deployed base in petabytes and should probably be compared
  - GPFS and Lustre dominate the field of distributed file systems and deserve to be compared
  - Lustre has the largest reported installed base (5.5 PB), but not a single HEP organization has ever deployed it!
  - dCache is present at many HEP sites; however, CASTOR alone stores more data than all reported dCache areas (NB: we are missing data from FNAL)

Slide 13: Tentative plan until September 2007
- Continue with the data collection and analysis
  - Complete the questionnaire by Fall 2007
  - Report during the meeting at St. Louis
- Reduce the list of solutions to 7 names and create three mini task forces:
  - Home directories / software repositories: AFS, NFS, GPFS(?)
  - Data access solutions with a tape backend: CASTOR, dCache, HPSS
  - Scalable high-performance distributed file systems: Lustre, GPFS
- Each task force will have the goal of preparing an exhaustive collection of documentation on the corresponding solutions, describing best practices, and providing deployment advice, cost estimates and performance benchmarks (a simple benchmark sketch follows below)
  - All task forces will have to present an interim progress report during the St. Louis meeting
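As a hint of what a simple, reproducible throughput benchmark for such comparisons could look like, here is a minimal sketch under our own assumptions (it is not a WG deliverable): it times a sequential write and read of a test file on a mounted file system; the target path and sizes are placeholders.

```python
import os
import time

def sequential_io_benchmark(path, size_mb=1024, block_mb=4):
    """Time a sequential write and read of a test file on the target file system."""
    block = b"\0" * (block_mb * 1024 * 1024)
    n_blocks = size_mb // block_mb

    # Sequential write, flushed to the storage backend before the timer stops.
    t0 = time.time()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    write_mb_s = size_mb / (time.time() - t0)

    # Sequential read (note: may be served from the page cache unless it is dropped first).
    t0 = time.time()
    with open(path, "rb") as f:
        while f.read(block_mb * 1024 * 1024):
            pass
    read_mb_s = size_mb / (time.time() - t0)

    os.remove(path)
    return write_mb_s, read_mb_s

if __name__ == "__main__":
    # '/lustre/scratch/benchmark.tmp' is a placeholder path on the file system under test.
    w, r = sequential_io_benchmark("/lustre/scratch/benchmark.tmp", size_mb=256)
    print(f"write: {w:.1f} MB/s, read: {r:.1f} MB/s")
```

A real comparison would of course use the task forces' agreed tools and common hardware; this only shows the basic write/read timing idea.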

Slide 14: Some input for discussion
- This working group is open to all sites (HEP and non-HEP) interested in storage issues; we appreciate any feedback and would welcome new active members
- We appeal to FNAL to join us actively (or at least to provide their storage data, otherwise our report will not be complete)
- Our web site (http://hepix.caspur.it/storage) is open to all universities, research labs and organizations. Access to it is protected by a symbolic password which was widely circulated among HEPiX members and may be obtained at any time by mail: just send your request to monica.calori at caspur.it