The LCG Service Challenges and GridPP Jamie Shiers, CERN-IT-GD 31 January 2005.

LCG Project, Grid Deployment Group, CERN Agenda  Reminder of the Goals and Timelines of the LCG Service Challenges  Summary of LHC Experiments’ Computing Models  Outline of Service Challenges  Review of SC1  Status of SC2  Plans for SC3 and beyond

LCG Project, Grid Deployment Group, CERN Who am I?  First came to CERN: 1978 – NA5  Ian Bird was working on NA2 at that time  1981: moved to Max Planck Munich and NA9  NA2++ plus streamer chamber as on NA5  1984: joined CERN IT (then DD) in equivalent of FIO  VAX services, used for Oracle, CAD/CAM for LEP, interactive VMS  1989: moved to US group – File Catalog for LEP  + conditions DB, ZFTP, CHEOPS, …  1994+: proposals for OO projects for LHC  RD45 (led to Objectivity/DB), LHC++ (Anaphe)  2000: IT-DB group  Phase out of Objy services; new Oracle contract, openlab, Oracle 9i and 10g  2005: IT-GD  2008: ?

LCG Project, Grid Deployment Group, CERN Why am I here? As we will see in more detail later:  Summer 2005: SC3 - include 2 Tier2s; progressively add more  Summer / Fall 2006: SC4 complete  SC4 – full computing model services  Tier-0, ALL Tier-1s, all major Tier-2s operational at full target data rates (~1.8 GB/sec at Tier-0)  acquisition - reconstruction - recording – distribution, PLUS ESD skimming, servicing Tier-2s  How many Tier2s?  ATLAS: already identified 29  CMS: some 25  With overlap, assume some 50 T2s total(?)  This means that in the 12 months from ~August 2005 we have to add 2 T2s per week  Cannot possibly be done using the same model as for T1s  SC meeting at a T1 as it begins to come online for service challenges  Typically a 2-day (lunchtime – lunchtime) meeting

LCG Project, Grid Deployment Group, CERN GDB / SC meetings / T1 visit Plan  In addition to planned GDB meetings, Service Challenge Meetings, Network Meetings etc:  Visits to all Tier1 sites (initially)  Goal is to meet as many of the players as possible  Not just GDB representatives! Equivalents of Vlado etc.  Current Schedule:  Aim to complete visits to many of the European sites by Easter  “Round the world” trip to BNL / FNAL / Triumf / ASCC in April  Need to address also Tier2s  Cannot be done in the same way!  Work through existing structures, e.g.  HEPiX, national and regional bodies etc.  e.g. GridPP (12)  Talk on SC Update at May HEPiX (FZK), with a more extensive programme at Fall HEPiX (SLAC)  Maybe some sort of North American T2-fest around this?

LCG Project, Grid Deployment Group, CERN LCG Service Challenges - Overview  LHC will enter production (physics) in April 2007  Will generate an enormous volume of data  Will require huge amount of processing power  LCG ‘solution’ is a world-wide Grid  Many components understood, deployed, tested…  But…  Unprecedented scale  Humungous challenge of getting large numbers of institutes and individuals, all with existing, sometimes conflicting commitments, to work together  LCG must be ready at full production capacity, functionality and reliability in less than 2 years from now  Issues include h/w acquisition, personnel hiring and training, vendor rollout schedules etc.  Should not limit ability of physicists to exploit performance of detectors nor LHC’s physics potential  Whilst being stable, reliable and easy to use

LCG Project, Grid Deployment Group, CERN Key Principles  Service challenges result in a series of services that exist in parallel with the baseline production service  Rapidly and successively approach production needs of LHC  Initial focus: core (data management) services  Swiftly expand out to cover full spectrum of production and analysis chain  Must be as realistic as possible, including end-to-end testing of key experiment use-cases over extended periods with recovery from glitches and longer-term outages  Necessary resources and commitment are a pre-requisite to success!  Should not be under-estimated!

LCG Project, Grid Deployment Group, CERN Initial Schedule (1/2)  Tentatively suggest quarterly schedule with monthly reporting  e.g. Service Challenge Meetings / GDB respectively  Less than 7 complete cycles to go!  Urgent to have detailed schedule for 2005 with at least an outline for remainder of period asap  e.g. end January 2005  Must be developed together with key partners  Experiments, other groups in IT, T1s, …  Will be regularly refined, ever increasing detail…  Detail must be such that partners can develop their own internal plans and say what is and what is not possible  e.g. FIO group, T1s, …

LCG Project, Grid Deployment Group, CERN Initial Schedule (2/2)  2005 Q1 / Q2: up to 5 T1s, writing to tape at 50MB/s per T1 (no expts)  2005 Q3 / Q4: include two experiments and a few selected T2s  2006: progressively add more T2s, more experiments, ramp up to twice nominal data rate  2006: production usage by all experiments at reduced rates (cosmics); validation of computing models  2007: delivery and contingency  N.B. there is more detail in December GDB presentations  Need to be re-worked now!

LCG Project, Grid Deployment Group, CERN Agenda  Reminder of the Goals and Timelines of the LCG Service Challenges  Summary of LHC Experiments’ Computing Models  Outline of Service Challenges  Review of SC1  Status of SC2  Plans for SC3 and beyond

LCG Project, Grid Deployment Group, CERN Computing Model Summary - Goals  Present key features of LHC experiments’ Computing Models in a consistent manner  Highlight the commonality  Emphasize the key differences  Define these ‘parameters’ in a central place (LCG web)  Update with change-log as required  Use these parameters as input to requirements for Service Challenges  To enable partners (T0/T1 sites, experiments, network providers) to have a clear understanding of what is required of them  Define precise terms and ‘factors’

LCG Project, Grid Deployment Group, CERN Where do these numbers come from?  Based on Computing Model presentations given to GDB in December 2004 and to T0/T1 networking meeting in January 2005  Documents are those publicly available for January LHCC review  Official website is protected  Some details may change but the overall conclusions do not!  Part of plan is to understand how sensitive overall model is to variations in key parameters  Iteration with experiments is on-going  i.e. I have tried to clarify any questions that I have had  Any mis-representation or mis-interpretation is entirely my responsibility  Sanity check: compare with numbers from MoU Task Force

LCG Project, Grid Deployment Group, CERN
 Nominal: the raw figures produced by multiplying e.g. event size x trigger rate.
 Headroom: a factor of 1.5 that is applied to cater for peak rates.
 Efficiency: a factor of 2 to ensure networks run at less than 50% load.
 Recovery: a factor of 2 to ensure that backlogs can be cleared within 24 – 48 hours and to allow the load from a failed Tier1 to be switched over to others.
 Total Requirement: a factor of 6 must be applied to the nominal values to obtain the bandwidth that must be provisioned. Arguably this is an over-estimate, as “Recovery” and “Peak load” conditions are presumably relatively infrequent, and can also be smoothed out using appropriately sized transfer buffers. But as there may be under-estimates elsewhere…
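As an illustration of how these factors compound, here is a minimal sketch; the numeric factors are those listed above, while the function name and example values are mine:

```python
# Sketch of how the provisioning factors above combine into the overall factor of 6.
# The factor values come from the slide; the function itself is purely illustrative.

HEADROOM = 1.5    # cater for peak rates
EFFICIENCY = 2.0  # keep network load below ~50%
RECOVERY = 2.0    # clear backlogs within 24-48 hours / absorb a failed Tier1

def provisioned_bandwidth(nominal_mb_per_s: float) -> float:
    """Bandwidth to provision (MB/s) for a given nominal data rate (MB/s)."""
    return nominal_mb_per_s * HEADROOM * EFFICIENCY * RECOVERY  # overall factor of 6

# e.g. a nominal 100 MB/s stream calls for ~600 MB/s of provisioned capacity
print(provisioned_bandwidth(100.0))  # -> 600.0
```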

All numbers presented will be nominal unless explicitly specified

LCG Project, Grid Deployment Group, CERN Overview of pp running
Experiment  SIM    SIM ESD  RAW    Trigger  RECO   AOD    TAG
ALICE       400KB  40KB     1MB    100Hz    200KB  50KB   10KB
ATLAS       2MB    500KB    1.6MB  200Hz    500KB  100KB  1KB
CMS         2MB    400KB    1.5MB  150Hz    250KB  50KB   10KB
LHCb        400KB  –        25KB   2KHz     75KB   25KB   1KB
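To show how the “nominal” figures are derived from these parameters (event size x trigger rate), the sketch below multiplies the RAW sizes and trigger rates from the table; the dictionary layout and variable names are illustrative, not part of the computing model documents:

```python
# Nominal RAW data rate per experiment = RAW event size x trigger rate.
# Values are the pp-running figures from the table above; the script is illustrative only.

pp_raw = {
    # experiment: (RAW event size in MB, trigger rate in Hz)
    "ALICE": (1.0, 100),
    "ATLAS": (1.6, 200),
    "CMS":   (1.5, 150),
    "LHCb":  (0.025, 2000),
}

for expt, (size_mb, rate_hz) in pp_raw.items():
    print(f"{expt:5s}: {size_mb * rate_hz:6.1f} MB/s nominal RAW rate")

total = sum(size_mb * rate_hz for size_mb, rate_hz in pp_raw.values())
print(f"Total: {total:6.1f} MB/s nominal RAW rate (pp running)")
```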

LCG Project, Grid Deployment Group, CERN pp questions / uncertainties  Trigger rates essentially independent of luminosity  Explicitly stated in both ATLAS and CMS CM docs  Uncertainty (at least in my mind) on issues such as zero suppression, compaction etc of raw data sizes  Discussion of these factors in CMS CM doc p22:  RAW data size ~300kB (Estimated from MC)  Multiplicative factors drawn from CDF experience  MC Underestimation factor 1.6  HLT Inflation of RAW Data, factor 1.25  Startup, thresholds, zero suppression,… Factor 2.5  Real initial event size more like 1.5MB  Could be anywhere between 1 and 2 MB  Hard to deduce when the event size will fall and how that will be compensated by increasing luminosity  i.e. total factor = 5 for CMS raw data  N.B. must consider not only Data Type (e.g. result of Reconstruction) but also how it is used  e.g. compare how Data Types are used in LHCb compared to CMS  All this must be plugged into the meta-model!
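As a quick sanity check of the CMS factors just quoted (the numbers are from the CMS excerpt above; the few lines of arithmetic are mine):

```python
# Sanity check of the CMS RAW-size scaling factors quoted above.
mc_estimate_kb = 300        # RAW size estimated from MC
factors = [1.6, 1.25, 2.5]  # MC underestimation, HLT inflation, startup/thresholds/zero suppression

real_size_kb = mc_estimate_kb
for f in factors:
    real_size_kb *= f

print(real_size_kb)                   # 1500.0 kB, i.e. ~1.5 MB
print(real_size_kb / mc_estimate_kb)  # overall factor of 5
```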

LCG Project, Grid Deployment Group, CERN Overview of Heavy Ion running
Experiment  SIM    SIM ESD  RAW     Trigger  RECO   AOD    TAG
ALICE       300MB  2.1MB    12.5MB  100Hz    2.5MB  250KB  10KB
ATLAS       –      –        5MB     50Hz     –      –      –
CMS         –      –        7MB     50Hz     1MB    200KB  TBD
LHCb        N/A

LCG Project, Grid Deployment Group, CERN Heavy Ion Questions / Uncertainties  Heavy Ion computing models less well established than for pp running  I am concerned about the model for 1st/2nd/3rd pass reconstruction and data distribution  “We therefore require that these data (Pb-Pb) are reconstructed at the CERN T0 and exported over a four-month period after data taking. This should leave enough time for a second and third reconstruction pass at the Tier 1’s” (ALICE)  Heavy Ion model has major impact on those Tier1s supporting these experiments  All bar LHCb!  Critical to clarify these issues as soon as possible…

LCG Project, Grid Deployment Group, CERN Data Rates from MoU Task Force
[Table: per-site data rates in MB/sec for RAL, FNAL, BNL, FZK, IN2P3, CNAF, PIC and the T0 total, with rows for ATLAS, CMS, ALICE, LHCb, the T1 totals in MB/sec and in Gb/sec, the Estimated T1 Bandwidth Needed ((Totals * 1.5 (headroom)) * 2 (capacity)), and the Assumed Bandwidth Provisioned.]
Spreadsheet used to do this calculation will be on Web. Table is in

LCG Project, Grid Deployment Group, CERN Data Rates using CM Numbers Steps:  Take Excel file used to calculate MoU numbers  Change one by one the Data Sizes as per latest CM docs  See how overall network requirements change  Need also to confirm that model correctly reflects latest thinking  And understand how sensitive the calculations are to e.g. changes in RAW event size, # of Tier1s, roles of specific Tier1s etc.
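A rough sketch of the kind of sensitivity study described in these steps; it assumes, purely for illustration, that RAW data are shared evenly across the Tier1s, which the real MoU spreadsheet does not (it uses per-experiment, per-site shares):

```python
# Rough sensitivity sketch: how the provisioned bandwidth per Tier1 changes with
# the RAW event size and with the number of Tier1s sharing the data.
# Even sharing across Tier1s is assumed purely for illustration.

PROVISIONING_FACTOR = 6  # headroom (1.5) x efficiency (2) x recovery (2)

def per_tier1_gbps(raw_size_mb: float, trigger_hz: float, n_tier1s: int) -> float:
    nominal_mb_s = raw_size_mb * trigger_hz / n_tier1s
    return nominal_mb_s * PROVISIONING_FACTOR * 8 / 1000.0  # MB/s -> Gb/s

for raw_mb in (1.0, 1.5, 2.0):   # vary a CMS-like RAW event size
    for n_t1 in (6, 7, 10):      # vary the number of participating Tier1s
        gbps = per_tier1_gbps(raw_mb, 150, n_t1)
        print(f"RAW={raw_mb} MB, {n_t1} T1s: {gbps:.2f} Gb/s per T1")
```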

LCG Project, Grid Deployment Group, CERN Base Requirements for T1s  Provisioned bandwidth comes in units of 10Gbits/sec although this is an evolving parameter  From Reply to Questions from Computing MoU Task Force…  Since then, some parameters of the Computing Models have changed  Given the above quantisation, relatively insensitive to small-ish changes  Important to understand implications of multiple-10Gbit links, particularly for sites with a Heavy Ion programme  For now, need to plan for 10Gbit links to all Tier1s

LCG Project, Grid Deployment Group, CERN Response from ‘Networkers’ [Hans Döbbeling] believes the GEANT2 consortium will be able to deliver the following for the 7 European TIER1s: 1. list of networking domains and technical contacts 2. time plan of availability of services 1G VPN, 1G Lambda, 10Gig Lambda at the national GEANT2 pops and at the TIER1 sites 3. a model for SLAs and the monitoring of SLAs 4. a proposal for operational procedures 5. a compilation of possible cost sharing per NREN Proposes that CERN focus on issues related to non-European T1s

LCG Project, Grid Deployment Group, CERN Agenda  Reminder of the Goals and Timelines of the LCG Service Challenges  Summary of LHC Experiments’ Computing Models  Outline of Service Challenges  Review of SC1  Status of SC2  Plans for SC3 and beyond

Service Challenge Meeting “Review of Service Challenge 1” James Casey, IT-GD, CERN RAL, 26 January 2005

LCG Project, Grid Deployment Group, CERN Overview  Reminder of targets for the Service Challenge  What we did  What can we learn for SC2?

LCG Project, Grid Deployment Group, CERN Milestone I & II Proposal From NIKHEF/SARA Service Challenge Meeting  Dec04 - Service Challenge I complete  mass store (disk) - mass store (disk)  3 T1s (Lyon, Amsterdam, Chicago)  500 MB/sec (individually and aggregate)  2 weeks sustained  Software: GridFTP plus some scripts  Mar05 - Service Challenge II complete  Software: reliable file transfer service  mass store (disk) - mass store (disk),  5 T1’s (also Karlsruhe, RAL,..)  500 MB/sec T0-T1 but also between T1’s  1 month sustained

LCG Project, Grid Deployment Group, CERN Service Challenge Schedule From FZK Dec Service Challenge Meeting:  Dec 04  SARA/NIKHEF challenge  Still some problems to work out with bandwidth to teras system at SARA  Fermilab  Over CERN shutdown – best effort support  Can try again in January in originally provisioned slot  Jan 05  FZK Karlsruhe

LCG Project, Grid Deployment Group, CERN SARA – Dec 04  Used a SARA specific solution  Gridftp running on 32 nodes of SGI supercomputer (teras)  3 x 1Gb network links direct to teras.sara.nl  3 gridftp servers, one for each link  Did load balancing from CERN side  3 oplapro machines transmitted down each 1Gb link  Used radiant-load-generator script to generate data transfers  Much effort was put in by SARA personnel (~1-2 FTEs) before and during the challenge period  Tests ran from 6–20 December  Much time spent debugging components
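For reference, a single GridFTP third-party copy of the kind such load-generation scripts drive could be launched roughly as below; the host names and file paths are invented, globus-url-copy is the standard Globus client, and the exact options used during SC1 are not shown here:

```python
# Hypothetical driver for one GridFTP third-party transfer, roughly what the
# radiant-load-generator script automates many times over. Host names and file
# paths are invented; only the standard globus-url-copy client and its -vb / -p
# options are assumed.
import subprocess

SRC = "gsiftp://oplapro01.cern.ch/data/sc1/file0001"   # invented source URL (CERN side)
DST = "gsiftp://teras.sara.nl/scratch/sc1/file0001"    # invented destination URL (SARA side)

# -vb prints throughput while transferring; -p 1 requests a single stream,
# matching the single-stream transfers reported for SC1.
subprocess.run(["globus-url-copy", "-vb", "-p", "1", SRC, DST], check=True)
```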

LCG Project, Grid Deployment Group, CERN Problems seen during SC1  Network Instability  Router electrical problem at CERN  Interruptions due to network upgrades on CERN test LAN  Hardware Instability  Crashes seen on teras 32-node partition used for challenges  Disk failure on CERN transfer node  Software Instability  Failed transfers from gridftp. Long timeouts resulted in significant reduction in throughput  Problems in gridftp with corrupted files  Often hard to isolate a problem to the right subsystem

LCG Project, Grid Deployment Group, CERN SARA SC1 Summary  Sustained run of 3 days at end  6 hosts at CERN side; single-stream transfers, 12 files at a time  Average throughput was 54MB/s  Error rate on transfers was 2.7%  Could transfer down each individual network link at ~40MB/s  This did not translate into the expected 120MB/s aggregate speed  Load on teras and oplapro machines was never high (~6-7 for the 32-node teras, < 2 for the 2-node oplapro)  See Service Challenge wiki for logbook kept during the Challenge

LCG Project, Grid Deployment Group, CERN Gridftp problems  64-bit compatibility problems  logs negative numbers for file size > 2 GB  logs erroneous buffer sizes to the logfile if the server is 64-bit  No checking of file length on transfer  No error message doing a third party transfer with corrupted files  Issues followed up with globus gridftp team  First two will be fixed in next version.  The issue of how to signal problems during transfers is logged as an enhancement request
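The "negative file size" symptom is what one would expect when a 64-bit size is pushed through a signed 32-bit field; a minimal illustration of the failure mode (not the gridftp code itself):

```python
# Minimal illustration of the "negative file size" logging symptom: a size above
# 2^31 - 1 bytes wraps around when forced into a signed 32-bit integer.
# This illustrates the failure mode only; it is not the actual gridftp code.
import ctypes

file_size = 3 * 1024**3                    # a 3 GB file
logged = ctypes.c_int32(file_size).value   # what a signed 32-bit log field records
print(file_size, "->", logged)             # 3221225472 -> -1073741824
```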

LCG Project, Grid Deployment Group, CERN FermiLab & FZK – Dec 04/Jan 05  FermiLab declined to take part in Dec 04 sustained challenge  They had already demonstrated 500MB/s for 3 days in November  FZK started this week  Bruno Hoeft will give more details in his site report

LCG Project, Grid Deployment Group, CERN What can we learn?  SC1 did not succeed  We did not meet the milestone of 500MB/s for 2 weeks  We need to do these challenges to see what actually goes wrong  A lot of things do, and did, go wrong  We need better test plans for validating the infrastructure before the challenges (network throughput, disk speeds, etc…)  Ron Trompert (SARA) has made a first version of this  We need to proactively fix low-level components  Gridftp, etc…  SC2 and SC3 will be a lot of work!

LCG Project, Grid Deployment Group, CERN 2005 Q1(i) SC2 - Robust Data Transfer Challenge Set up infrastructure for 6 sites  Fermi, NIKHEF/SARA, GridKa, RAL, CNAF, CCIN2P3 Test sites individually – at least two at 500 MByte/s with CERN Agree on sustained data rates for each participating centre Goal – by end March sustained 500 Mbytes/s aggregate at CERN In parallel - serve the ATLAS “Tier0 tests” (needs more discussion)

LCG Project, Grid Deployment Group, CERN Status of SC2

LCG Project, Grid Deployment Group, CERN 2005 Q1(ii) In parallel with SC2 – prepare for the next service challenge (SC3) Build up 1 GByte/s challenge facility at CERN  The current 500 MByte/s facility used for SC2 will become the testbed from April onwards (10 ftp servers, 10 disk servers, network equipment) Build up infrastructure at each external centre  Average capability ~150 MB/sec at a Tier-1 (to be agreed with each T-1) Further develop reliable transfer framework software  Include catalogues, include VO’s

LCG Project, Grid Deployment Group, CERN 2005 Q2-3(i) SC3 - 50% service infrastructure  Same T1s as in SC2 (Fermi, NIKHEF/SARA, GridKa, RAL, CNAF, CCIN2P3)  Add at least two T2s  “50%” means approximately 50% of the nominal rate of ATLAS+CMS Using the 1 GByte/s challenge facility at CERN -  Disk at T0 to tape at all T1 sites at 80 Mbyte/s  Data recording at T0 from same disk buffers  Moderate traffic disk-disk between T1s and T2s Use ATLAS and CMS files, reconstruction, ESD skimming codes Goal - 1 month sustained service in July  500 MBytes/s aggregate at CERN, 80 MBytes/s at each T1
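As a quick consistency check of these targets (an illustrative calculation, not from the slide), 80 MByte/s into each of the six SC2-era Tier1s accounts for essentially all of the 500 MByte/s aggregate quoted at CERN:

```python
# Consistency check of the SC3 disk-to-tape targets quoted above (illustrative only).
n_tier1s = 6        # Fermi, NIKHEF/SARA, GridKa, RAL, CNAF, CCIN2P3
per_t1_mb_s = 80    # disk at T0 -> tape at each T1
print(n_tier1s * per_t1_mb_s, "MB/s aggregate out of CERN")  # 480 MB/s, ~ the 500 MB/s goal
```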

LCG Project, Grid Deployment Group, CERN SC3 Planning  Meetings with other IT groups at CERN to refine goals of SC3 (milestone document) and steps that are necessary to reach them  IT-GM: definition of middleware required, schedule, acceptance tests etc  IT-ADC: pre-production and production services required, e.g. Database backends  IT-FIO: file transfer servers etc etc  IT-CS: network requirements etc  Informal discussions with experiments regarding involvement of production teams  Many details missing: (one being identification of T2s)

LCG Project, Grid Deployment Group, CERN SC3 Planning - cont  Base (i.e. non-experiment) software required for SC3 scheduled for delivery end-February 2005  Targeting service infrastructure for same date  Database services, Gridftp servers etc.  Acceptance testing during March  In parallel, have to start discussions with experiments, T1s etc

LCG Project, Grid Deployment Group, CERN 2005 Q2-3(ii) In parallel with SC3 prepare additional centres using the 500 MByte/s test facility  Test Taipei, Vancouver, Brookhaven, additional Tier-2s Further develop framework software  Catalogues, VO’s, use experiment specific solutions

LCG Project, Grid Deployment Group, CERN 2005 – September-December (i) 50% Computing Model Validation Period The service exercised in SC3 is made available to experiments for computing model tests Additional sites are added as they come up to speed End-to-end data rates –  500 Mbytes/s at CERN (aggregate)  80 Mbytes/s at Tier-1s  Modest Tier-2 traffic

LCG Project, Grid Deployment Group, CERN 2005 – September-December (ii) In parallel with the SC3 model validation period, in preparation for the first 2006 service challenge (SC4) – Using 500 MByte/s test facility  test PIC and Nordic T1s  and T2’s that are ready (Prague, LAL, UK, INFN, ...) Build up the production facility at CERN to 3.6 GBytes/s Expand the capability at all Tier-1s to full nominal data rate

LCG Project, Grid Deployment Group, CERN 2006 January-August SC4 – full computing model services - Tier-0, ALL Tier-1s, all major Tier-2s operational at full target data rates (~1.8 GB/sec at Tier-0) - acquisition - reconstruction - recording – distribution, PLUS ESD skimming, servicing Tier-2s Goal – stable test service for one month – April 2006 100% Computing Model Validation Period (May-August 2006) Tier-0/1/2 full model test - All experiments - 100% nominal data rate, with processing load scaled to 2006 cpus

LCG Project, Grid Deployment Group, CERN 2006 September The SC4 service becomes the permanent LHC service – available for experiments’ testing, commissioning, processing of cosmic data, etc. All centres ramp-up to capacity needed at LHC startup  TWICE nominal performance  Milestone to demonstrate this 6 months before first physics data

LCG Project, Grid Deployment Group, CERN Timeline  Official target date for first collisions in LHC: April 2007  Including ski-week(s), this is only 2 years away!  But the real target is even earlier!  Must be ready 6 months prior to data taking  And data taking starts earlier than colliding beams!  Cosmics (ATLAS in a few months), calibrations, single beams, …

LCG Project, Grid Deployment Group, CERN Conclusions  To be ready to fully exploit LHC, significant resources need to be allocated to a series of service challenges by all concerned parties  These challenges should be seen as an essential on-going and long-term commitment to achieving production LCG  The countdown has started – we are already in (pre-)production mode  Next stop: 2020