Preparations for the CMS-HI Computing Workshop in Bologna

Preparations for the CMS-HI Computing Workshop in Bologna (September 12, 2009)
Charles F. Maguire, Vanderbilt University, for the CMS-HI Program
August 19, 2009, MIT-Vanderbilt EVO Meeting

Workshop in Bologna, Sep. 12: Present and Perfect the CMS-HI Computing Model
- No EVO transmissions; will rely mostly on on-site participants
- Can look into audio links (local call to a Bologna number; how much for all day?)
- Draft agenda prepared by Matthias and myself
  - Need to provide more details for the agenda contents
  - Need to assign the participants who will be there and block out time allocations

Preparation Work Done So Far
- Contacts and responses from T0/CAF managers for the CMS pp program
  - Bolek has investigated what is needed for AliCal during HI running
  - Julia is checking on the DQM operations; more consultation expected
- Contacts and first responses from Tier1 and Tier2 sites (renegotiations?):
  - France (Tier1 + Tier2): new e-mail expected in late August, probable 2010 commitment
  - Turkey (Tier2): resources committed for 2010, no one present at Bologna; network testing to/from Vanderbilt will start
  - Sao Paulo (Tier2): resources possible in 2010, no one present at Bologna
  - Russia (Tier2): official Tier2 site for HI, distributed over several institutions; network testing to/from Vanderbilt in progress; will Olga attend or is she on vacation?
- Substantial progress on reconstruction CPU times and memory footprint (see the sketch below)
  - Need to examine the corresponding simulation and analysis requirements
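As a rough illustration of the kind of estimate behind the reconstruction CPU bullet above, the sketch below folds a per-event reconstruction time and a farm size into a wall-clock estimate. All numbers (event count, seconds per event, core count, efficiency) are hypothetical placeholders, not the figures actually presented in the talk.

```python
# Back-of-envelope reconstruction CPU estimate.
# All numeric inputs are placeholders, not CMS-HI measurements.

def reco_wall_days(n_events, sec_per_event, n_cores, efficiency=0.85):
    """Wall-clock days to reconstruct n_events, assuming sec_per_event of CPU
    time per event and n_cores running at the given scheduling efficiency."""
    cpu_seconds = n_events * sec_per_event
    wall_seconds = cpu_seconds / (n_cores * efficiency)
    return wall_seconds / 86400.0

# Hypothetical example: 30 million min-bias events at 100 s/event on 1000 cores.
print(f"{reco_wall_days(30e6, 100.0, 1000):.1f} days")
```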

Draft Agenda, Part I
- Opening remarks: Kasemann, Wyslouch, and Maguire (15 minutes each)
  - Goals of the workshop, major questions, impact on future CMS review meetings
  - CMS-HI physics and institutional overview, working relationship with CMS-HEP
  - General description of the CMS-HI computing model, similarities/differences with CMS-HEP
- T0/CAF session: Oliver Gutsche, Markus Klute, and Bolek Wyslouch
  - The generation and processing of alignment/calibration files during data taking
  - The initial data reconstruction at the Tier0 and the computing requirements for this task
- CPU and mass storage requirements session: Maguire, as proxy for Eric's work
  - The estimates of the offline compute requirements based on the physics research
  - HI data tiers, sizes, and total data volumes
- Resources proposed and committed: various names + Maguire
  - The initial availability of Tier1 facilities for CMS-HI in 2010
  - The initial availability of Tier2 facilities for CMS-HI in 2010 and later

Draft Agenda, Part II
- Data processing and analysis session: Maguire
  - Transport of the raw data and processed files to the Vanderbilt site for archiving
  - Benchmarking of current network tests (see the sketch below)
  - Analysis of the reconstruction output at the Vanderbilt site and at overseas sites
- Monte Carlo production session: MIT and Russia (?) speakers
  - Overview of the Monte Carlo software, memory and CPU requirements
  - The MC production at the MIT and Moscow Tier2 sites; production at other sites?
- Session on operations: various (Sheldon, Tackett, proxy for Velkovska)
  - Role of Data Operations, Facility Operations, and Analysis Operations in HI data collection, processing, and production
  - Special needs on DQM, monitoring, calibration and alignment, and the required development effort
- Summary session: collect open questions and action items
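Since the data processing session will review network benchmarks for moving raw and processed files to Vanderbilt, a minimal sketch of the underlying arithmetic may help frame that discussion. The data volume, link speed, and utilization below are hypothetical placeholders, not results from the ongoing tests.

```python
# Rough file-transfer time estimate (hypothetical numbers, not benchmark results).

def transfer_days(volume_tb, link_gbps, utilization=0.5):
    """Days needed to move volume_tb terabytes over a link_gbps link, assuming
    only a fraction (utilization) of the nominal bandwidth is actually achieved."""
    volume_bits = volume_tb * 1e12 * 8            # TB -> bits
    effective_bps = link_gbps * 1e9 * utilization
    return volume_bits / effective_bps / 86400.0

# Hypothetical example: 50 TB of raw data over a 1 Gb/s path at 50% utilization.
print(f"{transfer_days(50, 1.0):.1f} days")
```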

Tasks for PInGs and Computing Groups (from the HI West Meeting, August 14)

Tasks for PInGs
- First-year estimates of the analysis and disk space requirements for 50 TBytes of minimum bias data (see the sketch below)
  - These estimates should be tied to the publication goals we have for the first year
- Establish the physics goals for the second- and third-year HLT runs
  - Determine the CPU and disk requirements for these HLT runs

Tasks for the Computing Group (names should be attached to these tasks; based on the draft agenda shown in the next two pages)
- Establish the alignment/calibration work flow and supervision
- Be ready and able to utilize the Tier0/CAF resources during HI running
- Demonstrate that the grid network of CMS-HI is reliable and sufficient
- Prove that the CMS-HI reco/analysis job streams work at a large scale
- Confirm all the non-Tier0 compute resources for 2010 and later
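To make the first PInG task concrete, here is a minimal sketch of the kind of first-year disk estimate being requested. Only the 50 TB raw-data figure comes from the slide; the derived-tier fractions and the replication factor are hypothetical placeholders that the PInGs would replace with real numbers tied to their publication goals.

```python
# First-year disk estimate for the min-bias sample.
# Only the 50 TB raw figure is from the slide; all fractions are placeholders.

RAW_TB = 50.0

# Assumed size of each derived data tier as a fraction of the raw volume.
tier_fractions = {
    "RECO": 0.5,           # placeholder: reconstructed events
    "AOD": 0.1,            # placeholder: analysis object data
    "user ntuples": 0.05,  # placeholder: group analysis output
}
replicas = 2               # placeholder: number of disk copies kept

total_tb = sum(RAW_TB * f for f in tier_fractions.values()) * replicas
for tier, f in tier_fractions.items():
    print(f"{tier:>12}: {RAW_TB * f * replicas:6.1f} TB on disk")
print(f"{'total':>12}: {total_tb:6.1f} TB (excluding the raw data itself)")
```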