SC4 Planning: Planning for the Initial LCG Service, September 2005


Slide 1: SC4 Planning - Planning for the Initial LCG Service, September 2005

Slide 2: Introduction
- The LCG Service Challenges are about preparing, hardening and delivering the production LHC Computing Environment.
- The date for delivery of the production LHC Computing Environment is 30 September 2006.
- Production Services are required as from 1 September 2005 (service phase of Service Challenge 3) and 1 May 2006 (service phase of Service Challenge 4).
- This is not an exercise.

Slide 3: pp / AA data rates (equal split)

Columns: Centre | ALICE | ATLAS | CMS | LHCb | Rate into T1 (pp) | Rate into T1 (AA)

ASCC, Taipei
CNAF, Italy
PIC, Spain
IN2P3, Lyon
GridKA, Germany
RAL, UK
BNL, USA (all ESD): 11.3
FNAL, USA (expect more): 16.9
TRIUMF, Canada
NIKHEF/SARA, NL
Nordic Data Grid Facility
Totals: 610 / 76

N.B. these calculations assume an equal split, as in the Computing Model documents. It is clear that this is not the 'final' answer…

Slide 4: pp data rates – 'weighted'

Centre                    | ALICE | ATLAS | CMS  | LHCb | Rate into T1 (pp)
ASCC, Taipei              |   -   |  8%   | 10%  |  -   | 100
CNAF, Italy               |  7%   |       | 13%  | 11%  | 200
PIC, Spain                |   -   |  5%   |      | 6.5% | 100
IN2P3, Lyon               |  9%   | 13%   | 10%  | 27%  | 200
GridKA, Germany           |   ?   |  ?    |  ?   |  ?   | 200
RAL, UK                   |   -   |  7%   |  3%  | 15%  | 150
BNL, USA                  |   -   | 22%   |  -   |  -   | 200
FNAL, USA                 |   -   |  -    | 28%  |  -   | 200
TRIUMF, Canada            |   -   |  4%   |  -   |  -   | 50
NIKHEF/SARA, NL           |  3%   | 13%   |  -   | 23%  | 150
Nordic Data Grid Facility |  6%   |       |  -   |  -   | 50
Totals                    |   -   |  -    |  -   |  -   | 1,600

Full AOD & TAG to all T1s (probably not in early days)
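As a cross-check of the two tables, the following minimal sketch (my own illustration, not part of the slides) sums the 'Rate into T1 (pp)' column of the weighted table and compares it with what a naive equal split of the same total over the eleven Tier1 centres would give.

```python
# Minimal sketch: total T0->T1 pp rate and what an equal split would give,
# using only the per-centre figures quoted in the weighted table above.

WEIGHTED_RATES_MB_S = {
    "ASCC": 100, "CNAF": 200, "PIC": 100, "IN2P3": 200, "GridKA": 200,
    "RAL": 150, "BNL": 200, "FNAL": 200, "TRIUMF": 50, "NIKHEF/SARA": 150,
    "NDGF": 50,
}

total = sum(WEIGHTED_RATES_MB_S.values())        # 1,600 MB/s, as in the table
equal_share = total / len(WEIGHTED_RATES_MB_S)   # naive equal split across 11 Tier1s

print(f"Total T0->T1 pp rate: {total} MB/s")
print(f"Equal split over {len(WEIGHTED_RATES_MB_S)} Tier1s: {equal_share:.0f} MB/s each")
for centre, rate in sorted(WEIGHTED_RATES_MB_S.items(), key=lambda kv: -kv[1]):
    print(f"  {centre:12} {rate:4} MB/s  ({rate / total:.1%} of total)")
```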

Slide 5: Data Rates
- Two years before data taking, we can transfer from an SRM at CERN to a DPM at a T1 at roughly the target data rate.
- Stably, reliably, days on end.
- Need to do this to all T1s, at target data rates, to tape.
- Plus a factor 2 for backlogs / peaks (a rough sketch of this arithmetic follows below).
- Need to have fully debugged recovery procedures.
- Data flows from re-processing need to be discussed:
  - New ESD copied back to CERN (and to another T1 for ATLAS).
  - AOD and TAG copied to other T1s, T0, T2s (subset for AOD?).
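The 'factor 2 for backlogs / peaks' is easy to see with a back-of-the-envelope calculation. The sketch below is illustrative (the 150 MB/s nominal rate is an example value, not a site target): running at exactly the nominal rate leaves no capacity to drain a backlog, while a factor-2 headroom lets a day of accumulated data be recovered in about a day.

```python
# Back-of-the-envelope sketch of transfer-rate headroom for backlog recovery.
# nominal_mb_s is illustrative only; it is not a commitment for any site.

SECONDS_PER_DAY = 86_400

def required_rate(nominal_mb_s: float, backlog_days: float, recovery_days: float) -> float:
    """Rate needed to keep up with new data AND drain an accumulated backlog."""
    backlog_mb = nominal_mb_s * backlog_days * SECONDS_PER_DAY
    return nominal_mb_s + backlog_mb / (recovery_days * SECONDS_PER_DAY)

nominal = 150.0  # MB/s, e.g. a mid-sized Tier1 target (illustrative)
print(required_rate(nominal, backlog_days=1, recovery_days=1))  # 300.0 -> the 'factor 2'
print(required_rate(nominal, backlog_days=3, recovery_days=2))  # 375.0 -> peaks cost more
```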

Slide 6: Services at CERN
Building on the 'standard service model':
1. First level support: operations team
   - Box-level monitoring, reboot, alarms, procedures, etc.
2. Second level support team: Grid Deployment group
   - Alerted by operators and/or alarms (and/or production managers…)
   - Follow 'smoke-tests' for applications
   - Identify the appropriate third level support team to call
   - Responsible for maintaining and improving procedures
   - Two people per week: complementary to the Service Manager on Duty
   - Provide a daily report to the SC meeting (09:00); interact with experiments
   - Members: IT-GD-EIS, IT-GD-SC (including me)
   - Phone numbers:
3. Third level support teams: by service (definition of a service?)
   - Notified by 2nd level and/or through operators (by agreement)
   - Should be called (very) rarely…

Slide 7: Service Challenge 4 – SC4
- SC4 starts April 2006.
- SC4 ends with the deployment of the FULL PRODUCTION SERVICE.
- Deadline for component (production) delivery: end January 2006.
- Adds further complexity over SC3 – 'extra dimensions':
  - Additional components and services, e.g. COOL and other DB-related applications
  - Analysis Use Cases
  - SRM 2.1 features required by the LHC experiments – have to monitor progress!
  - Most Tier2s, all Tier1s at full service level
  - Anything that dropped off the list for SC3…
  - Services oriented at analysis & end users
- What implications for the sites?
- Analysis farms:
  - Batch-like analysis at some sites (no major impact on sites)
  - Large-scale parallel interactive analysis farms at major sites: (100 PCs + 10 TB storage) x N (see the sizing sketch below)
- User community:
  - No longer a small (<5) team of production users
  - Work groups of people
  - Large (100s – 1000s) numbers of users worldwide
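The '(100 PCs + 10 TB storage) x N' sizing for interactive analysis farms translates directly into aggregate numbers. The sketch below is illustrative only; the number of sites and the cores-per-PC figure are assumptions, not values from the slides.

```python
# Illustrative aggregation of the "(100 PCs + 10 TB storage) x N" sizing above.
# n_sites and cores_per_pc are assumed values for the example.

PCS_PER_SITE = 100
STORAGE_TB_PER_SITE = 10

def aggregate(n_sites: int, cores_per_pc: int = 2) -> dict:
    """Total PCs, cores and disk for N interactive-analysis farms."""
    return {
        "sites": n_sites,
        "PCs": n_sites * PCS_PER_SITE,
        "cores": n_sites * PCS_PER_SITE * cores_per_pc,
        "storage_TB": n_sites * STORAGE_TB_PER_SITE,
    }

print(aggregate(n_sites=10))
# {'sites': 10, 'PCs': 1000, 'cores': 2000, 'storage_TB': 100}
```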

Slide 8: Analysis Use Cases (HEPCAL II)
- Production Analysis (PA)
  - Goals in context: create AOD/TAG data from input for physics analysis groups
  - Actors: experiment production manager
  - Triggers: need input for "individual" analysis
- (Sub-)Group Level Analysis (GLA)
  - Goals in context: refine AOD/TAG data from a previous analysis step
  - Actors: analysis-group production manager
  - Triggers: need input for refined "individual" analysis
- End User Analysis (EA)
  - Goals in context: find "the" physics signal
  - Actors: end user
  - Triggers: publish data and get the Nobel Prize :-)

Slide 9: SC4 Timeline
- Now - September: clarification of SC4 Use Cases, components, requirements, services, etc.
- October 2005: SRM 2.1 testing starts; FTS/MySQL; target for post-SC3 services.
- January 31st 2006: basic components delivered and in place. This is not the date the s/w is released – it is the date production services are ready.
- February / March: integration testing.
- February: SC4 planning workshop at CHEP (w/e before).
- March 31st 2006: integration testing successfully completed.
- April 2006: throughput tests.
- May 1st 2006: Service Phase starts (note compressed schedule!).
- September 30th 2006: Initial LHC Service in stable operation.
- Summer 2007: first LHC event data.

Slide 10: SC4 Use Cases (?)
Not covered so far in the Service Challenges:
- T0 recording to tape (and then out)
- Reprocessing at T1s
- Calibrations & distribution of calibration data
- HEPCAL II Use Cases
- Individual (mini-) productions (if / as allowed)
Additional services to be included:
- Full VOMS integration
- COOL, other AA services, experiment-specific services (e.g. ATLAS HVS)
- PROOF? xrootd? (analysis services in general…)
- Testing of next-generation IBM and STK tape drives

Slide 11: Remaining Challenges
- Bring core services up to the robust 24 x 7 standard required.
- Bring the remaining Tier2 centres into the process.
- Identify the additional Use Cases and functionality for SC4.
- Build a cohesive service out of a distributed community: clarity; simplicity; ease-of-use; functionality.
- Get the (stable) data rates up to the target.