The CMS-HI Computing Plan, Vanderbilt University

The CMS-HI Computing Plan
Charles F. Maguire, Vanderbilt University, for the CMS-HI Group
Version V1, May 29, 21:41 Geneva time (about half completed)
June 2, 2010, DOE-NP On-Site Review at Vanderbilt

Outline of the Review Talks
- Dennis Hall: Support of Vanderbilt University for CMS-HI
- Bolek Wyslouch: Overview of CMS-HI Physics Plans
- Charles Maguire: Detailed View of CMS-HI Computing
- Markus Klute: T0 Operations in Support of CMS-HI
- Lothar Bauerdick: FNAL T1 Operations for CMS-HI
- Raphael Granier de Cassagnac: Contribution of non-US T2s
- Edward Wenger: HI Software Status Within the CMS Model
- Alan Tackett: The Role of ACCRE in CMS-HI Computing
- Esfandiar Zafar: ITS Support of ACCRE in CMS-HI
- Alan Tackett: An Inspection Tour of the Proposed T2 Site

Summary of Updated Computing Proposal
- CMS-HI computing plans follow the CMS model very closely
  - Extensive use of world-wide CMS computing resources and working groups
  - Continuous oversight by upper-level CMS computing management
- Alignment, calibration, and on-line DQM at the T0
- A complete prompt reconstruction pass to be done at the T0
- Archival storage at the T0 and standard transfer to the FNAL T1 site
- Secondary archival storage at the FNAL T1
- Transfer of files to disk storage at a new Vanderbilt T2 site, which will:
  - Perform analysis passes on the prompt-reco data set
  - Distribute analyzed data sets to MIT and to four non-US T2 HI sites
  - Complete multiple reconstruction and analysis re-passes on the data sets
  - Contain a T3 component hosting US CMS-HI participants (as MIT already does)
- An enhanced role for the MIT HI site in simulation and analyses
- Non-US T2 sites contribute a significant fraction of the analysis base

What is New and Different This Year
- Sustained use of the T0 for prompt reconstruction
  - Confirmed by new, rigorous simulations monitoring time and memory use
  - Each annual HI data set can be promptly reconstructed in time at the T0: just a few days for the 2010 HI min-bias data; 18-25 days for later HLT data
  - NOTE: the T0 cannot be used as a T1 or a T2 site by CMS-HI (or by anyone else)
- Standard transfer of HI files from the T0 to the FNAL T1
  - To be supervised by the transfer group at the T0; simply one more month of duty
  - Further details are in Markus Klute's presentation
  - Strongly recommended by CMS computing management at the Bologna workshop
  - No adverse impact is seen for the pp program; more explanation is in Lothar Bauerdick's presentation
- CPU requirements for CMS-HI calibrated in HS06 units
  - The standard measure for the LHC experiments, used throughout CMS
  - CMS-HI software all runs in the latest official CMSSW framework
  - A large investment of effort went into determining processing times, memory use, and file sizes (a worked conversion sketch follows this list)
- Careful inventory of potential non-US T2 HI contributions
  - The full accounting is tabulated in Raphael Granier de Cassagnac's presentation
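Since HS06-second bookkeeping recurs throughout the tables that follow, here is a minimal sketch of the conversion from a measured per-event CPU time to processor-independent units. The core rating and per-event time are illustrative assumptions, not measurements from the proposal:

```python
# Minimal sketch of the HS06 conversion behind the CPU estimates.
# The core rating and per-event time are illustrative assumptions,
# not measurements from the proposal.

HS06_PER_CORE = 8.0         # assumed HS06 rating of one benchmarked core
RECO_SEC_PER_EVENT = 100.0  # assumed measured reco time on that core, s/event

# Per-event cost in processor-independent units
hs06_s_per_event = HS06_PER_CORE * RECO_SEC_PER_EVENT   # 800 HS06-s/event

# One pass over a hypothetical 50 million event data set
total_load = 50e6 * hs06_s_per_event
print(f"{total_load:.1e} HS06-s per reconstruction pass")  # 4.0e+10 HS06-s
```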

Overview of HI Computing: DAQ and T0
- Anticipated start of HI running
  - Change to Pb+Pb acceleration expected on November 1
  - Two weeks of set-up time
  - Physics data taking expected to begin in mid-November
  - Pb+Pb running for 4 weeks, assuming 10^6 seconds of live time
- Special DAQ considerations (see Markus Klute's talk)
  - No zero suppression of ECal and HCal leads to ~10 MB/event
  - Zero suppression to be achieved off-line at the T0, leading to 3 MB/event
  - Work flows and CPU requirements for this task to be completed in summer 2010
- Data processing
  - AlCa and DQM to be performed on-line, as in the pp running
  - DQM for HI being advanced by Julia Velkovska (at Vanderbilt) and Pelin Kurt (at CERN)
  - "Live" demonstration during pp running scheduled at the post-review ROC tour
  - Prompt reco of the zero-suppressed files at the T0, < 7 days total (see the next set of slides for how the CPU requirements were determined)
  - Leads to 1 MB/event; ~400 TB of raw data and prompt-reco files transferred to the FNAL T1 (a back-of-envelope volume check follows this list)
  - Processing after FNAL should be at Vanderbilt (but see the back-up "Plan B" later)
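A back-of-envelope check of the volumes quoted above, using only the per-event sizes from this slide; the event count is an assumption taken from the upper end of the 2010 range in Table 4:

```python
# Data volumes implied by the per-event sizes quoted on this slide.
# The event count is an assumption from the upper end of Table 4's 2010 range.

n_events = 80e6    # assumed 2010 min-bias events taken
raw_mb = 3.0       # MB/event after off-line zero suppression at the T0
reco_mb = 1.0      # MB/event of prompt-reco output

raw_tb = n_events * raw_mb / 1e6     # 240 TB of zero-suppressed raw data
reco_tb = n_events * reco_mb / 1e6   #  80 TB of prompt-reco files

print(f"raw {raw_tb:.0f} TB + reco {reco_tb:.0f} TB "
      f"= {raw_tb + reco_tb:.0f} TB to the FNAL T1")
```

This is a rough consistency check, not the proposal's exact accounting; at the upper end of the event range it lands within about 20% of the quoted ~400 TB.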

Determination of the CMS-HI Computing Requirements in a Processor-Independent Fashion

Method of Determining CPU Requirements
- All CMS-HI software is in the standard CMSSW release schedule
  - A major accomplishment since the last review (see Edward Wenger's talk)
  - Ensures that the validation tests are done and the timing results are correct
- Simulation data testing (done by MIT and Vanderbilt graduate students)
  - Ran in CMS simulation configurations modified especially for HI use
  - Looked at minimum-bias events to model the 2010 HI run
  - Looked at central events (< 10% centrality) to model future HLT runs
  - Each stage of the simulation process was specifically checked: the GEANT4 tracking stage in the CMS detector (by far the most time-consuming step) and the reconstruction step, using simulated raw data files from the first step
  - CPU times, memory consumption, and file output sizes were all recorded
  - Tests were done on processors with an already known HS06 rating
- The estimate of CPU requirements was made in a data-driven fashion (a scaling sketch follows this list)
  - Scaled according to the projected kind and annual number of events (see next slide)
  - Provision was made for a standard number of reco and analysis re-passes annually
  - Experience-based assumptions were made for analysis and simulation demands
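A minimal sketch of the data-driven scaling just described: per-event costs measured on nodes with a known HS06 rating, multiplied by the projected event count and the annual re-pass provision. All numeric values here are placeholders, not the proposal's measurements:

```python
# Data-driven scaling: per-event cost (HS06-s, measured on nodes with a
# known HS06 rating) times projected events and annual re-pass provision.
# All numeric values are placeholders, not the proposal's measurements.

def annual_reco_load(n_events, hs06_s_per_event, n_repasses):
    """Integrated reconstruction load, in HS06-seconds, for one year."""
    return n_events * hs06_s_per_event * (1 + n_repasses)

# Example: a hypothetical HLT year with 50M events at 400 HS06-s/event
# and two re-passes on top of the prompt pass.
print(f"{annual_reco_load(50e6, 400.0, 2):.1e} HS06-s")   # 6.0e+10 HS06-s
```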

Annual Raw Data Volume Projections

Table 4 (page 20): Projected luminosity and up-time profile for CMS-HI runs

  Year | Ave. Lumin. (cm^-2 s^-1) | Up Time (s) | Events Taken (10^6) | Raw Data (TByte)
  2010 | 2-5 x 10^25              | 10^5        | 40-80 (Min Bias)    | ~250
  2011 | 5 x 10^26                | 5 x 10^5    | 50 (HLT)            | 150
  2013 | 1 x 10^27                | 10^6        | 75 (HLT)            | 300
  2014 | -                        | -           | -                   | -

Notes:
1) First-year figures are relatively uncertain.
2) HI collision event-size characteristics are completely unexplored at this energy.
3) The LHC down year is 2012.
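As a consistency check, the raw-data column can be re-derived from the 3 MB/event zero-suppressed size quoted on the DAQ/T0 slide. The 2010 and 2011 rows agree well; the 2013 row implies closer to 4 MB/event, plausibly reflecting the more central HLT selection (that last point is an inference, not a statement from the proposal):

```python
# Cross-check of Table 4's raw-data column against the 3 MB/event
# zero-suppressed size quoted on the DAQ/T0 slide.

EVENT_SIZE_MB = 3.0
for year, events_e6, quoted_tb in [("2010", 80, "~250"),
                                   ("2011", 50, "150"),
                                   ("2013", 75, "300")]:
    tb = events_e6 * EVENT_SIZE_MB          # 10^6 events x MB/event = TB
    print(f"{year}: {tb:.0f} TB (table: {quoted_tb} TB)")
# 240 vs ~250 and 150 vs 150 agree; 300 TB for 2013 implies ~4 MB/event.
```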

HI Data Reconstruction Estimates

Table 5 (page 21): Projected raw data reconstruction computing effort

  Year | Trigger  | Up Time (s) | Compute Load (10^10 HS06-s) | T0 Reco (days) | Re-passes | Time at VU (days/re-pass)
  2010 | Min Bias | 10^5        | 1.7                         | 4              | 3         | 72
  2011 | HLT      | 5 x 10^5    | 8.5                         | 18             | 1.3       | 133
  2012 | No beam  | -           | as above (re-passes only)   | None           | 2.7       | 65
  2013 | HLT      | 10^6        | 12.8                        | 25             | 2         | 79
  2014 | -        | -           | -                           | -              | -         | -

Notes:
1) The compute load is the integrated power for one reconstruction pass.
2) The Time at VU column assumes a four-year annual growth of HS06 power at VU of 3268, 8588, 17708, and 23028.
3) The Time at VU column also assumes VU reco fractions of 0.65, 0.55, 0.55, 0.45, and 0.45.
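The T0 Reco column is just the integrated load divided by the available power; inverting it gives the implied T0 share for HI prompt reconstruction. That capacity is an inference from the table, not a number quoted in the proposal:

```python
# Inverting Table 5: implied T0 HS06 share = load / (days x 86400 s/day).
# This capacity is inferred from the table, not quoted in the proposal.

SEC_PER_DAY = 86400.0
for year, load_hs06_s, t0_days in [("2010", 1.7e10, 4),
                                   ("2011", 8.5e10, 18),
                                   ("2013", 12.8e10, 25)]:
    share = load_hs06_s / (t0_days * SEC_PER_DAY)
    print(f"{year}: ~{share:,.0f} HS06")
# ~49k, ~55k, ~59k HS06: a roughly constant HI share of the T0 farm.
```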

HI Data Analysis Estimates

Table 8 (page 24): Integrated T2 computing load compared to available resources

  Year | Analysis + Simulation Need (10^11 HS06-s) | Vanderbilt T2 (10^11 HS06-s) | Total T2 Base (10^11 HS06-s) | Ratio: Available/Need
  2010 | 1.47                                      | 0.29                         | 1.52                         | 104%
  2011 | 2.54                                      | 0.98                         | 2.45                         | 97%
  2012 | 3.73                                      | 2.01                         | -                            | 100%
  2013 | 4.71                                      | 3.20                         | 5.15                         | 109%
  2014 | -                                         | -                            | -                            | 114%

Notes:
1) The column-two analysis and simulation needs are computed in back-up slides.
2) The column-three VU T2 values are computed using the HS06 growth model in slide 9.
3) The column-four total T2 base assumes an MIT HS06 growth model of 1900 (already in place), 2850, 3800, 4750, and 5700.
4) The column-four total T2 base also assumes 3000 HS06 from non-US T2 HI sites.
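The ratio column is simply Total T2 Base divided by Need; a quick check of the rows where both entries survived the transcription:

```python
# Sanity check of Table 8's ratio column: Total T2 Base / Need.

for year, need, available in [("2010", 1.47, 1.52),
                              ("2011", 2.54, 2.45),
                              ("2013", 4.71, 5.15)]:
    print(f"{year}: {available / need:.0%}")
# 103%, 96%, 109%: matches the quoted 104%, 97%, and 109% to within
# rounding of the two-decimal table entries.
```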

Determination of the Storage Volume Requirements

Vanderbilt Local Infrastructure: ACCRE and ITS

MIT Heavy Ion Analysis Center

HI Computing Operations

HI View of Personnel Responsibilities
1) It is important that all the CMS experts in each system shown on the left (Detector, Readout, ...) realize that their expertise will be critically needed during the HI run. For some systems these experts will be almost completely in charge, while in others (RECO, SIM, ...) the HI people will take the lead in setting direction.
2) In April we lost a key HI person with knowledge of T0 operations, and it is very unlikely that we can replace that person in time to make a difference in the summer preparations. Early, dedicated help with T0 operations, assisting the HI students and post-docs, will therefore be vital.

HI Computing Organization

Proposal Budget

Project Management

Summary of Responses to First Review Comments

Comments on Resource Requirements (To Be Completed)

Comments on Quality of Service Burdens (To Be Completed)

Comments on Computing Operations (To Be Completed)

Summary (To Be Completed)

Backup Slides (To Be Completed)