LCG Service Challenges: Planning for Tier2 Sites Update for HEPiX meeting Jamie Shiers IT-GD, CERN.

HEPiX Meeting, GridKA, May 2005

Executive Summary
- Tier2 issues have been discussed extensively since early this year
- The role of Tier2s, and the services they offer and require, has been clarified
- The data rates for MC data are expected to be rather low (limited by available CPU resources)
- The data rates for analysis data depend heavily on the analysis model (and, in my view, on the feasibility of producing new analysis datasets)
- LCG needs to provide: installation guides / tutorials for DPM, FTS and LFC
- Tier1s need to assist Tier2s in establishing services

Tier2 and Base S/W Components
1) Disk Pool Manager (of some flavour), e.g. dCache, DPM, ...
2) gLite FTS client (and T1 services)
3) Possibly also a local catalog, e.g. LFC, FiReMan, ...
4) Experiment-specific s/w and services ('agents')
Components 1-3 will be bundled with the LCG release; experiment-specific s/w will not.
[N.B. we are talking interfaces, not implementations]

LCG Service Challenges – Deploying the Service

Tier2s and SC3
- Initial goal is for a small number of Tier2-Tier1 partnerships to set up agreed services and gain experience
  - This will be input to a wider deployment model
- Need to test transfers in both directions:
  - MC upload
  - Analysis data download
- Focus is on service rather than "throughput tests"
  - As an initial goal, we propose running transfers over at least several days
  - e.g. using 1 GB files, show sustained rates of ~3 files/hour T2 -> T1
- More concrete goals for the Service Phase will be defined together with the experiments in the coming weeks
  - Definitely no later than the June workshop
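The sustained-rate target above implies only a modest bandwidth per Tier2. A quick back-of-the-envelope check (assuming 1 GB = 1024 MB and a steady stream of files):

```python
# Back-of-the-envelope check of the SC3 Tier2 -> Tier1 target:
# ~3 files/hour of 1 GB each, sustained.
FILE_SIZE_MB = 1024        # one 1 GB file, in MB
FILES_PER_HOUR = 3

mb_per_second = FILE_SIZE_MB * FILES_PER_HOUR / 3600.0
mbit_per_second = mb_per_second * 8

print(f"Sustained rate: {mb_per_second:.2f} MB/s ({mbit_per_second:.1f} Mbit/s)")
# -> Sustained rate: 0.85 MB/s (6.8 Mbit/s)
```

In other words, under a megabyte per second per T2-T1 pair: well within the capacity of the site links assumed for SC3, which is consistent with the emphasis on service reliability rather than throughput.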

Draft List of (Database) Services
- The following table shows the current list of services required by the experiments for SC3
- Current assumptions: Oracle at T0/T1; MySQL at T2
- It does not include experiment-specific applications / services such as CMS' PubDB
- This list is currently being confirmed with the experiments
- Next step: confirm data volumes & rates, understand the resource implications, and make a first proposal for the Service Phase schedule

Component   Experiments        Tier0             Tier1           Tier2
LFC         All except ALICE   Central catalog                   N/A
FiReMan     All except ALICE   Central catalog                   N/A
FTS         All except CMS     Client+Server     Client+Server   Client
SRM 1.1     All                Yes               Yes             Yes
COOL        ATLAS              Central DB                        N/A
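The draft table can be captured as a small lookup structure. The following is a hypothetical encoding only; the tier assignments mirror the draft above, which is still being confirmed with the experiments:

```python
# Hypothetical encoding of the draft SC3 service matrix above.
# None means no assignment listed in the draft; "N/A" means not applicable.
SERVICES = {
    # service:  (experiments,        tier0,             tier1,           tier2)
    "LFC":      ("all except ALICE", "central catalog", None,            "N/A"),
    "FiReMan":  ("all except ALICE", "central catalog", None,            "N/A"),
    "FTS":      ("all except CMS",   "client+server",   "client+server", "client"),
    "SRM 1.1":  ("all",              "yes",             "yes",           "yes"),
    "COOL":     ("ATLAS",            "central DB",      None,            "N/A"),
}

def tier2_role(service: str):
    """Return the Tier2 role for a service, per the draft table."""
    return SERVICES[service][3]

print(tier2_role("FTS"))  # -> client
```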

Initial Tier-2 Sites
- For SC3 we aim for the sites listed in the table below
- Training in the UK this Friday (13th) and in Italy at end May (26-27)
- Other interested parties: Prague, Warsaw, Moscow, ...
- Addressing the larger-scale problem via national / regional bodies: GridPP, INFN, HEPiX, US-ATLAS, US-CMS, TRIUMF (Canada)

Site             Tier1           Experiment(s)
Bari, Italy      CNAF, Italy     CMS
Padova, Italy    CNAF, Italy     CMS, (ALICE)
Turin, Italy     CNAF, Italy     ALICE
DESY, Germany    FZK, Germany    ATLAS, CMS
Lancaster, UK    RAL, UK         ATLAS
London, UK       RAL, UK         CMS
ScotGrid, UK     RAL, UK         LHCb
US Tier2s        BNL / FNAL      ATLAS / CMS

T2s – Concrete Target
- We need a small number of well-identified T2/T1 partners for SC3, as listed above
- The initial target of end-May is not realistic, but not strictly necessary either...
- Need a prototype service in at least two countries by end-June
- Do not strongly couple T2-T1 transfers to the T0-T1 throughput goals of the SC3 setup phase
- Nevertheless, target one week of reliable T2 -> T1 transfers involving at least two T1 sites, each with at least two T2s, by end July 2005
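A rough sizing of that one-week goal, assuming the same SC3 setup-phase rate (~3 x 1 GB files/hour per T2-T1 pair) and the minimal topology of two T1s with two T2s each (both figures are assumptions taken from the targets above, not agreed numbers):

```python
# Hypothetical sizing of the end-July SC3 goal: one week of reliable
# T2 -> T1 transfers, >= 2 T1 sites, each with >= 2 T2 sites.
FILES_PER_HOUR = 3        # per T2 -> T1 pair (SC3 setup-phase target)
FILE_SIZE_GB = 1
HOURS = 7 * 24            # one week
PAIRS = 2 * 2             # 2 T1s x 2 T2s each (minimal topology)

files_per_pair = FILES_PER_HOUR * HOURS
total_gb = files_per_pair * FILE_SIZE_GB * PAIRS
print(f"{files_per_pair} files (~{files_per_pair} GB) per pair, ~{total_gb} GB overall")
# -> 504 files (~504 GB) per pair, ~2016 GB overall
```

So the week-long exercise moves only about half a terabyte per pair: small enough not to compete with the T0-T1 throughput goals, as the slide notes.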