
Slide 1: The CMS Computing Model. M. Paganoni, Workshop Storage T2, 17/01/08

Slide 2: (figure-only slide, no text in transcript)

Slide 3: Testing the system at scale

Slide 4: Job statistics
[Plot: terminated jobs per 6-day bin over one year of data processing]
Currently >20k jobs/day:
- massive MC production (for CSA07)
- a constant, significant presence of analysis jobs, complemented by JobRobot tests
- middleware tests, CRAB Analysis Server tests, ...
The CSA06 job submission was dominated by the JobRobot; since then user analysis has been significant and constant.

Slide 5: Event size
Current event-size requirements, estimated from measurements of CSA07 samples (e.g. /CSA07BJet/CMSSW_1_6_4-CSA07-Tier0-A2-Chowder/RECO):
- RAWsim ~ 2.0 MB
- RAW ~ 1.5 MB
- RECO ~ 0.5 MB
- AOD ~ 0.2 MB
Scope for reducing size:
- remove duplication / optimize resources
- handle RAW and RECO in 2 separate files (no FEVT)
A mini-workshop with the Physics and DPG groups on January 28/29 at CERN will discuss and prepare the implementation plan (RECO and AOD); implementation during February 2008, in time to be included in the CMSSW_2_0 release.
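A quick storage back-of-envelope from these per-event sizes; the 1e8-event sample used below is a hypothetical illustration, not a number from the slide:

    # Storage needed for a hypothetical 1e8-event sample at the CSA07
    # per-event sizes quoted above.
    event_size_mb = {"RAWsim": 2.0, "RAW": 1.5, "RECO": 0.5, "AOD": 0.2}
    n_events = 1e8  # hypothetical sample size

    for tier, size_mb in event_size_mb.items():
        total_tb = n_events * size_mb / 1e6  # 1 TB = 1e6 MB
        print(f"{tier}: {total_tb:.0f} TB")
    # RAWsim: 200 TB, RAW: 150 TB, RECO: 50 TB, AOD: 20 TB

Splitting RAW and RECO into two files keeps the same 2.0 MB/event total but avoids carrying a combined FEVT copy.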

Slide 6: Processing time
Current processing-time requirements, estimated from measurements of CSA07 samples:
- SIM ~ 180 kSI2K*s (twice the expected value)
- RECO ~ ? kSI2K*s
- AOD ~ 0.25 kSI2K*s
The results of the task force are being implemented in the CMSSW_2_0 release (Feb. 08).
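To give these costs scale, a small sketch converting them into per-core throughput; the 1 kSI2K reference core is an assumption for illustration:

    # Events/day a single reference core can process, from the per-event
    # CPU costs above. A 1 kSI2K core is an assumed reference point.
    SECONDS_PER_DAY = 86400
    core_power_ksi2k = 1.0  # assumption

    for step, cost in {"SIM": 180.0, "AOD": 0.25}.items():
        rate = SECONDS_PER_DAY * core_power_ksi2k / cost
        print(f"{step}: {rate:.0f} events/day/core")
    # SIM: 480 events/day/core, so large-scale MC production needs
    # O(1000) cores.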

Slide 7: LoadTest07
LoadTest07 transfer exercise: 2.5 TB/week in upload and 12 TB/week in download per site.
[Plots: transfer rates for Roma->*, *->Roma, LNL->*, *->LNL]
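Converted to sustained rates (plain unit arithmetic):

    # LoadTest07 weekly volumes as sustained transfer rates.
    SECONDS_PER_WEEK = 7 * 86400

    for direction, tb_week in {"upload": 2.5, "download": 12.0}.items():
        mb_s = tb_week * 1e6 / SECONDS_PER_WEEK  # TB/week -> MB/s
        print(f"{direction}: {mb_s:.1f} MB/s")
    # upload: ~4.1 MB/s, download: ~19.8 MB/s sustained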

Slide 8: LoadTest07 (continued)
[Plots: transfer rates for Pisa->*, *->Pisa, Bari->*, *->Bari]

Slide 9: MC production (50 M events/month)
- 50% produced at OSG sites
- 75% produced at Tier-2s
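As a scale check, the CPU needed to sustain this rate, assuming the CSA07 SIM cost from slide 6 and 100% efficiency (both assumptions):

    # CPU to sustain 50 M SIM events/month CMS-wide, assuming
    # ~180 kSI2K*s/event (slide 6) and 100% efficiency.
    SECONDS_PER_MONTH = 30 * 86400

    cpu = 50e6 * 180.0 / SECONDS_PER_MONTH   # kSI2K
    print(f"~{cpu:.0f} kSI2K CMS-wide")      # ~3470 kSI2K
    # At 75% from Tier-2s, that is ~2600 kSI2K of Tier-2 capacity.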

Slide 10: (figure-only slide, no text in transcript)

Slide 11: Resources used by LCG3 (figure only)

Slide 12: Detector commissioning (2007)
- Tracker: data processed at the TIF in 2007 (25% of the Tracker, read out over 2 months) = 20 TB
- Muons: data taken in the last Global Run (2 weeks) = 2 TB
- ECAL:
  1. ECAL local-mode data taking with the cosmic trigger
  2. ECAL local-mode data taking with pedestal, test-pulse or laser triggers
  Non-zero-suppressed ECAL event = 1.8 MB; local ECAL trigger rate = 5 Hz; 500 GB/day per ECAL local run
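A quick check of the ECAL throughput; the running time implied by the 500 GB/day figure is our inference, not stated on the slide:

    # ECAL local-run volume: 1.8 MB/event (non zero-suppressed) at 5 Hz.
    rate_mb_s = 1.8 * 5.0                       # 9 MB/s
    gb_full_day = rate_mb_s * 86400 / 1e3       # ~778 GB over 24 h
    hours_for_500gb = 500e3 / rate_mb_s / 3600  # ~15.4 h
    print(f"{rate_mb_s:.0f} MB/s; {gb_full_day:.0f} GB per 24 h")
    print(f"500 GB corresponds to ~{hours_for_500gb:.1f} h of running")
    # The quoted 500 GB/day thus matches roughly 15 h of local running.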

Slide 13: MC production 2008
- INFN produces 15% of CMS (10^9 events/year in total)
- 15-day buffer for simRAW+RECO
- All AOD kept in 2 copies
Totals: 380 TB, 1500 kSI2K
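A hedged plausibility check using the per-event figures from slides 5 and 6; the slide does not show how its totals were derived, so this is only an order-of-magnitude comparison:

    # MC 2008: INFN share and the SIM CPU it implies, using the CSA07
    # per-event cost (~180 kSI2K*s).
    SECONDS_PER_YEAR = 3.15e7

    n_infn = 0.15 * 1e9                        # 1.5e8 events/year
    cpu_sim = n_infn * 180.0 / SECONDS_PER_YEAR
    print(f"SIM alone: ~{cpu_sim:.0f} kSI2K")  # ~857 kSI2K
    # With RECO, merging and efficiency factors on top, the quoted
    # 1500 kSI2K is the right order of magnitude.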

Slide 14: Global Run 2008
- 20 Hz x 10^7 s x 75% = 1.5 x 10^8 events
- 25 kSI2K*s/event
- 5 reprocessings
Assuming 5% of the RAW is copied to the INFN Tier-2s for calibration/alignment and commissioning studies: 110 TB, 200 kSI2K
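The arithmetic behind these numbers; the trigger rate was lost in the transcript and is reconstructed here from the quoted total:

    # Global Run 2008: the 20 Hz rate follows from the quoted total
    # (rate * 1e7 s * 0.75 = 1.5e8  =>  rate = 20 Hz).
    n_events = 20 * 1e7 * 0.75             # 1.5e8 events

    raw_tb = n_events * 0.05 * 1.5 / 1e6   # 5% of RAW at 1.5 MB/event
    print(f"5% of RAW: ~{raw_tb:.1f} TB")  # ~11 TB
    # RAW alone gives ~11 TB; the slide's 110 TB presumably also counts
    # the associated RECO/AOD copies and the 5 reprocessed versions.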

Slide 15: Possible analysis workflows
- Tier-0/CAF: first pass, RAW -> RECO, AOD
- RECO and AOD shipped to the Tier-1s
- Tier-1s: central analysis skims (analysis algorithms run over AOD)
- AOD + analysis-skim output shipped to the Tier-2s
- Tier-2s: final analysis pre-selection (further selection, reduced output)
- Final samples shipped to the Tier-3s
- Tier-3s: fewer AOD collections, fast processing and FWLite

Slide 16: Analysis model
- 10 analysis groups per Tier-2
- ? events per analysis out of 3 x 10^8: AOD (2 copies) plus 5% of RAW+RECO
- 5 passes on data and 5 on simulation
- 0.25 kSI2K*s/event
Totals: 160 TB, 100 kSI2K
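The events-per-analysis figure is lost in the transcript; inverting the quoted CPU total shows what it implies, under the assumption that the CPU is averaged over a full year:

    # Invert the quoted 100 kSI2K to recover the implied per-analysis
    # sample size. Assumes the CPU is delivered over a full year.
    SECONDS_PER_YEAR = 3.15e7

    cpu_total = 100.0   # kSI2K (slide)
    n_groups = 10       # analysis groups per Tier-2
    passes = 5 + 5      # data + simulation
    cost = 0.25         # kSI2K*s/event

    ev = cpu_total * SECONDS_PER_YEAR / (n_groups * passes * cost)
    print(f"implied sample: ~{ev:.1e} events per analysis")  # ~1.3e8
    # The lost number was therefore plausibly O(1e8) events per analysis.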

Slide 17: Tape in/out operations at the Tier-0 (figure only)

Slide 18: Importance of I/O buffering and of large file sizes for CASTOR

Slide 19: CMS-INFN Tier-2 resources

  Site    Disk 2007 (TBN)    CPU 2007 (kSI2K)
  Bari          38                 284
  LNL           72                 420
  Pisa           ?                   ?
  Roma          62                 222
  TOT            ?                   ?

Pledged 2008: ?

Slide 20: V36 schedule (Nov '07), Aug 2007 to May 2008
[Timeline chart: 1) detector installation, commissioning & operation; 2) preparation of software, computing & physics analysis]
Detector milestones: Tracker insertion; 1 EE endcap installed, pixels installed; beam-pipe closed and baked out; last heavy element lowered; magnet test at low current, then cooldown of magnet; 2nd ECAL endcap ready for installation end Jun '08 (master contingency).
Cosmic runs: CMS Cosmic Run CCR_0T (several short periods, Dec-Mar); Cosmic Run CCR_4T.
Software releases: 1_6 (CSA07); 1_7 (CCR_0T, HLT validation); 1_8 (lessons of '07); 2_0 (CCR_4T, production startup MC samples).
Computing and physics: CSA07 with first 2007 physics analysis results out; MC production for startup; functional tests; CSA08 (CCRC, Combined Computing Readiness Challenge).