LCG Service Challenges: SC2 Goals
Jamie Shiers, CERN-IT-GD, 24 February 2005

Introduction
- Reminder of where we are with the Service Challenges
- Some words on mailing lists
- Goals of SC2
- Next talks…

Mailing Lists
3 new mailing lists have been created:
- service-challenge-man: for 'management level' discussions
- service-challenge-tech: for 'technical level' discussions
- service-challenge-cern: for discussions within the CERN team
- service-challenge-info (??): would include all of the above?
Should we start to use these in addition to project-lcg-gdb? (Actually I have: this one is already being used!)

2005 Q1: SC2, the Robust Data Transfer Challenge
- Set up infrastructure for 6 sites: Fermi, NIKHEF/SARA, GridKa, RAL, CNAF, CCIN2P3
- Test sites individually: at least two at 500 MB/s with CERN
- Agree on sustained data rates for each participating centre
- Goal: by end March, a sustained 500 MB/s aggregate at CERN
- In parallel, serve the ATLAS "Tier0 tests"
(Timeline figure: SC2 shown against the later milestones: cosmics, first beams, first physics, full physics run.)

SC2 Goals: Milestones document (Feb GDB)
High-level overview of SC2 from the Milestones document:
  "Service Challenge 2 extends SC1 by including at least 6 Tier1 centres, with the goal of running for one month sustained. Data rates of 100 MB/s disk-to-disk and 50 MB/s tape-to-tape are targeted for each participating Tier1 site, with a total aggregate throughput of 500 MB/s out of CERN. At least two sites should be tested at rates up to 500 MB/s per site. The sites that are expected to participate in this challenge include Fermi, NIKHEF/SARA, GridKa, RAL, CNAF, CCIN2P3 and BNL, although other sites may join as part of the preparatory exercise for later challenges."
This text was based on my understanding of existing milestones from previous presentations and discussions.
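A quick back-of-the-envelope check puts these targets in perspective (a sketch only: the site list and rates are taken from the milestone text above; decimal megabytes and a 30-day month are assumed):

```python
# Back-of-the-envelope check of the SC2 targets quoted above.
# Assumes decimal MB (10^6 bytes) and a 30-day "month sustained".
sites = ["Fermi", "NIKHEF/SARA", "GridKa", "RAL", "CNAF", "CCIN2P3"]
disk_rate_mb_s = 100    # per-site disk-to-disk target
aggregate_mb_s = 500    # aggregate target out of CERN
days = 30

nominal = len(sites) * disk_rate_mb_s
print(f"{len(sites)} sites x {disk_rate_mb_s} MB/s = {nominal} MB/s nominal "
      f"(vs {aggregate_mb_s} MB/s aggregate goal)")

volume_tb = aggregate_mb_s * 86_400 * days / 1e6   # MB -> TB
print(f"{days} days at {aggregate_mb_s} MB/s moves ~{volume_tb:.0f} TB")
```

So the nominal per-site targets leave some headroom over the aggregate goal, and a month-long sustained run at 500 MB/s moves on the order of 1.3 PB out of CERN.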

Reactions to Milestones
- "Documents should be made available ~2 weeks prior to meetings to allow T1s (and others) to comment."
  - I fully agree; I just did not have the time. I will do my best for future meetings. (They were progressively added in the days before…)
- "The tape-to-tape component of SC2 is new. Where did it come from? We have been planning based on previous presentations."
  - Once again, this reflects my understanding (for which I take full responsibility) of the state of planning when I landed on planet SC.
  - Because SC3 is so daunting, I believe it makes sense, and is in fact necessary, to include some optional components that can also be considered "SC3 preparation".
  - I would propose we retain this as an optional test with the sites that could be ready,
  - but it must not negatively interfere with the primary goals…

February Milestones

M2.01  Choice of data management components for SC2 (February GDB)
  The data management components and their versions that will be deployed at CERN will be defined, together with the plan for acceptance tests and service deployment. Components: CASTOR SRM (CERN, INFN), dCache SRM (FNAL, RAL, IN2P3, FZK), SARA SRM (?) + RADIANT; no file catalog(s).

M2.02  T0 configuration for SC2 (February)
  The hardware and network configuration to be used at CERN in SC2 should be finalized, together with the schedule for putting it in place.

M2.03  Choice of Tier1 sites to participate in SC2 (February GDB)
  The sites that will participate in SC2 should confirm their commitment.

M2.04  Choice of 2 Tier1 sites for 500 MB/s for SC2 (February GDB)
  The sites with which 500 MB/s data transfers will be attempted should be confirmed.

M2.05  Plans for SC2 (February GDB)
  All Tier1 centres involved in SC2 should present their plans for the challenge, including the foreseen data rates that they expect to be able to support. The plans should also detail the data management software components and versions that will be deployed.
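To make the kind of load behind these milestones concrete, the sketch below drives parallel GridFTP transfers and reports the achieved aggregate rate. It is illustrative only: the endpoints, paths, file count and file size are hypothetical, and the real SC2 transfers were scheduled by the RADIANT layer on top of the SRM implementations listed above, not by a script like this.

```python
import subprocess
import time

# Hypothetical endpoints for illustration; the real SC2 endpoints differed.
SRC = "gsiftp://castorgrid.cern.ch/castor/cern.ch/grid/sc2/file-%04d"
DST = "gsiftp://gridftp.tier1.example.org/data/sc2/file-%04d"

N_FILES = 10
FILE_MB = 1024     # assume 1 GB test files
STREAMS = 10       # parallel TCP streams per transfer

start = time.time()
# globus-url-copy: -p sets the number of parallel streams,
# -vb prints per-transfer performance as it runs.
procs = [subprocess.Popen(["globus-url-copy", "-p", str(STREAMS), "-vb",
                           SRC % i, DST % i])
         for i in range(N_FILES)]

ok = sum(p.wait() == 0 for p in procs)
elapsed = time.time() - start
print(f"{ok}/{N_FILES} transfers succeeded; "
      f"aggregate ~{ok * FILE_MB / elapsed:.0f} MB/s over {elapsed:.0f} s")
```

A driver like this only measures raw GridFTP throughput; the challenge itself also exercised the SRM layer and, above all, sustained unattended operation over weeks, which is where the "robust" in the challenge name bites.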

March Milestones

M2.07  Acceptance tests for data management s/w and configuration for SC2 (March GDB)
  [to be defined in detail]

M3.01  Draft list of Tier2 centres (March GDB)
  All experiments should come with a draft list of their potential Tier2 centres, with an outline of the likely network topology.

Mg.01  Heavy-ion models and data rates (March GDB)
  The required data rates between T0 and T1 sites should be presented, based on revised calculations from the relevant computing models.

Mg.02  Synchronization of service challenge and experiment milestones (March GDB)
  The milestones for experiment-specific challenges should be synchronized with the global service challenge milestones so that the former build on the latter.
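For Mg.01, the required rates derive from computing-model parameters such as event size, trigger rate and the number of Tier1s sharing the data. A toy illustration with round, 2005-era placeholder numbers (not any experiment's official figures):

```python
# Toy T0 -> T1 rate estimate from computing-model parameters.
# Round placeholder numbers, not official experiment figures.
raw_event_mb = 1.6   # MB per raw event (order of magnitude)
trigger_hz = 200     # events/s written at the Tier0
n_tier1 = 6          # Tier1s sharing a full copy of the raw data

t0_rate = raw_event_mb * trigger_hz   # MB/s out of the Tier0
per_t1 = t0_rate / n_tier1            # raw share per Tier1
print(f"T0 output ~{t0_rate:.0f} MB/s; ~{per_t1:.0f} MB/s of raw data "
      "per Tier1, before ESD/AOD replication")
```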

Participants and Goals
A mail has been sent to all participants asking them:
- to confirm their intent to participate;
- to outline their preferred schedule;
- to confirm the target data rates;
- to indicate whether they could take part in the additional goals:
  - pushing the data rate per site towards 500 MB/s;
  - obtaining 50 MB/s tape-to-tape.

Next talks…
- Site-specific plans for the participating T1s
- Update on Computing Models
- Plans for T2s
- SC3: draft milestones
- Coordination with the 3D project
- Future meetings