LHC Computing Review - Resources
ATLAS Resource Issues
John Huth, Harvard University

ATLAS Computing organization
[Organization chart: coordinators for simulation, reconstruction and database; QC group; architecture team; event filter; Technical Group; National Computing Board; Computing Steering Group; Physics; Computing Oversight Board; detector systems]

Scales of Effort
- Best benchmarks are the Tevatron collider experiments (CDF, D0)
- Scaling to the LHC:
  - CPU: factor of 1000 (event complexity)
  - Data volume: 10x to 100x
  - User/developer community: 5x
  - Distribution effort: 5x

The ATLAS Computing Model
- Data sizes/event (CTP numbers):
  - RAW: 1 MB (100 Hz)
  - ESD: 100 kB (moving up)
  - AOD: 10 kB
  - TAG: 100 B
- Tier-0: RAW, ESD, AOD, TAG
- Tier-1: ESD, AOD, TAG
- Tier-2: AOD, TAG
- Might be different for the first year(s)
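
A quick sketch of the yearly volumes these per-event sizes imply, assuming the 100 Hz rate above and about 10^7 s of effective running per year (the running time is an assumption here, not stated on the slide):

```python
# Back-of-envelope yearly data volumes implied by the CTP per-event sizes,
# assuming 100 Hz and ~1e7 s of effective running per year (assumed).
RATE_HZ = 100
SECONDS_PER_YEAR = 1e7
EVENTS_PER_YEAR = RATE_HZ * SECONDS_PER_YEAR   # 1e9 events/year

event_sizes_bytes = {
    "RAW": 1e6,    # 1 MB
    "ESD": 100e3,  # 100 kB
    "AOD": 10e3,   # 10 kB
    "TAG": 100,    # 100 B
}

for name, size in event_sizes_bytes.items():
    print(f"{name}: {EVENTS_PER_YEAR * size / 1e12:,.1f} TB/year")
# RAW ~1,000 TB (1 PB), ESD ~100 TB, AOD ~10 TB, TAG ~0.1 TB per year
```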

U.S. ATLAS Model as example

Data Grid Hierarchy
[Diagram: Tier 0 (CERN) and Tier 1 (FNAL/BNL) levels of the data grid hierarchy]

ATLAS Milestones
- 2001: Number and places for Tier-1 centers should be known
- 2002: Basic worldwide computing strategy should be defined
- 2003: Typical sizes for Tier-0 and Tier-1 centers should be proposed
- 2003: The role of Tier-2 centers in the GRID should be known

Facilities Architecture: USA as Example
- US ATLAS Tier-1 Computing Center at BNL
  - National in scope, at ~20% of Tier-0 (see notes at end)
- US ATLAS Tier-2 Computing Centers
  - Regional in scope, at ~20% of Tier-1
  - Likely one of them at CERN
- US ATLAS Institutional Computing Facilities
- US ATLAS Individual Desktop Systems

U.S. ATLAS as example
- Total US ATLAS facilities in '05 should include:
  - 10,000 SPECint95 for re-reconstruction
  - 85,000 SPECint95 for analysis
  - 35,000 SPECint95 for simulation
  - 190 TBytes/year of on-line (disk) storage
  - 300 TBytes/year of near-line (robotic tape) storage
  - Dedicated OC Mbit/sec Tier-1 connectivity to each Tier-2
  - Dedicated OC Mbit/sec to CERN
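
Summing the three CPU lines gives the per-Tier-1 figure reused on the CPU slide later in this talk:

```python
# Sum of the listed U.S. Tier-1 CPU targets (SPECint95).
us_tier1_cpu_si95 = {
    "re-reconstruction": 10_000,
    "analysis": 85_000,
    "simulation": 35_000,
}
print(sum(us_tier1_cpu_si95.values()))  # 130,000 SI95, matching the 130K SI95
                                        # per-Tier-1 figure on the CPU slide
```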

US ATLAS: Integrated Capacities by Year

Muon Level 2 Trigger
[Figure: radius-of-curvature map for muons]

Neutron Background Studies
[Figure: total neutron flux in kHz/cm²]

Resource Estimates for 1st Year
- Assumptions:
  - 100 Hz event rate
  - 2 passes through reconstruction
  - Low-luminosity running (10^33 cm^-2 s^-1)
  - Two-pass calibration
  - Costing at 2000 prices, adjusted by Moore's law
- Note: some estimates are "bottom-up", using ATLAS Physics TDR numbers.

ATLAS and the RC Hierarchy
- Intentions to set up a local Tier-1 have already been expressed in:
  - Canada (ATLAS, Tier-1/2)
  - France (LHC)
  - Germany (LHC or multinational? at CERN)
  - Italy (ATLAS?)
  - Japan (ATLAS, Tier-1/2)
  - Netherlands (LHC)
  - Russia (LHC)
  - UK (LHC)
  - USA (ATLAS)

CTP Estimate: Tier-1 Center
- A Tier-1 RC should have at startup (at least):
  - 30,000 SPECint95 for analysis
  - 20,000 SPECint95 for simulation
  - 100 TBytes/year of on-line (disk) storage
  - 200 TBytes/year of near-line (mass) storage
  - 100 Mbit/sec connectivity to CERN
- Assume no major raw-data processing or handling outside of CERN
- Re-reconstruction partially in RCs

Calibration Assumptions
- Muon system: 100 Hz of "autocalibration" data, 200 SI95/event; 2nd pass = 20 Hz for alignment
- Inner Detector: 10 Hz, 1 SI95/event for calibration (muon tracks); 2nd pass = alignment
- EM calorimeter: 0.2 Hz, 10 SI95/event (Z -> e+e-); 2nd pass = repeat analysis
- Hadronic calorimeter: 1 Hz, 100 SI95/event (isolated tracks); 2nd pass = repeat, with found tracks

Calibration Numbers
- CPU: 24,000 SI95 required
- Data storage: 1.3 PB (assuming the data from this pass are stored and fed into the raw data store)
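
The CPU figure follows from the rates and per-event costs on the previous slide; a rough check, assuming the second-pass per-event costs equal the first-pass ones (not stated explicitly):

```python
# Rough check of the 24,000 SI95 calibration CPU figure.
streams = [
    # (name, rate in Hz, cost in SI95*s per event)
    ("muon autocalibration, 1st pass", 100.0, 200),
    ("muon alignment, 2nd pass",        20.0, 200),
    ("inner detector (muon tracks)",    10.0,   1),
    ("EM calorimeter (Z -> e+e-)",       0.2,  10),
    ("hadronic calorimeter",             1.0, 100),
]
total_si95 = sum(rate * cost for _, rate, cost in streams)
print(f"{total_si95:,.0f} SI95")  # ~24,100 SI95, consistent with the 24,000 quoted
```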

Reconstruction
- Two passes
- Breakdown by system:
  - Muon: 200 SI95/event
  - Had + EM calorimeter: 10 SI95/event
  - Inner Detector: 100 SI95/event
- NB: at high luminosity the ID numbers may rise drastically; numbers may vary substantially by 2006
- Total CPU: 64,000 SI95 (Robertson: 65,000)
- Robotic store: 2 PB
- Reprocessing: 128K SI95 (one reprocessing per 3 months)
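
A rough check of the prompt-reconstruction total, assuming the breakdown above is in SI95·s per event and a 100 Hz event rate:

```python
# Two-pass prompt reconstruction CPU from the per-event costs above.
per_event_si95 = {"muon": 200, "had + EM calorimeter": 10, "inner detector": 100}
rate_hz = 100
one_pass = rate_hz * sum(per_event_si95.values())   # 31,000 SI95
print(one_pass, 2 * one_pass)   # two passes ~62,000 SI95, in line with the
                                # quoted 64,000 (Robertson: 65,000)
```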

Generation and Simulation
- "Astrophysical" uncertainties
- Highly model-dependent: scale of G4 activities vs. fast simulation (CDF vs. D0 models)
- Assume 1% of the total data volume is simulated via G4
  - 3000 SI95/event
  - Data store: 10 TB
- Remainder (10x) via fast simulation
  - 30(?) TB, negligible CPU
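
A sketch of the G4 sample size, assuming 10^9 real events/year (100 Hz × 10^7 s) and roughly 1 MB per stored simulated event (a RAW-like size; both are assumptions, not stated on the slide):

```python
# G4 simulation volume and an illustrative sustained CPU rate.
events_per_year = 1e9
g4_events = 0.01 * events_per_year      # 1e7 fully simulated events
g4_store_tb = g4_events * 1e6 / 1e12    # ~10 TB, as quoted
g4_cpu_si95 = g4_events * 3000 / 1e7    # ~3,000 SI95 sustained over 1e7 s
print(g4_store_tb, g4_cpu_si95)         # (CPU rate is illustrative, not quoted)
```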

Analysis
- 130,000 SI95 from the ATLAS CTP
- MONARC has pushed this number up
- Depends strongly on assumptions
- Example: U.S. estimate = 85K SI95, which would suggest a minimum of 500K SI95 for ATLAS, but with large uncertainties
- 300 TB of storage per regional center
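
One way to read that scaling (an assumption, not spelled out in the talk): if the 85K SI95 U.S. analysis estimate corresponds to a roughly 17-20% share of ATLAS, the collaboration-wide need comes out near the quoted minimum:

```python
# Hypothetical scaling of the U.S. analysis estimate to all of ATLAS,
# for an assumed U.S. share of 17-20%.
us_analysis_si95 = 85_000
for us_share in (0.20, 0.17):
    print(f"{us_analysis_si95 / us_share:,.0f} SI95")   # ~425,000 - 500,000 SI95
```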

Resources
- CERN:
  - Raw data store
  - 2 passes of reconstruction
  - Calibration
  - Reprocessing
  - Assume analysis etc. is part of contributions (e.g. an RC at CERN)
- Tier-1s:
  - Each has 20% of CERN capacity in CPU/tape/disk (reconstruction, ...)
  - Monte Carlo, calibration and analysis
- Costing via 2000 prices and Moore's law (1.4/year CPU, 1.18/year tape, 1.35/year disk)

CPU
- CERN: 216,000 SI95 (calibration, reconstruction and reprocessing only)
- Single Tier-1: 130K SI95 (U.S. example)
- Total: 1,500 kSI95
- NB: uncertainties in the analysis model and reprocessing times can dominate the estimates.

Data Storage
- Tape:
  - CERN: 2 PB (was 1 PB in the TP)
  - Each Tier-1: 400 TB (U.S. estimate)
  - Total: 4.0 PB
- Single Tier-1: 400 TB

Disk Storage
- More uncertainty: usage of compressed data, etc.
- Figure of merit: 25% of robotic tape
  - 540 TB at CERN
  - 100 TB in the ATLAS Computing TP
  - U.S. estimate: 100 TB
- Sum of CERN + Tier-1s: 1,540 TB
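
A consistency check of these disk figures, assuming the 25%-of-tape rule of thumb and 100 TB of disk per Tier-1 (the per-Tier-1 figure is the U.S. estimate above; the implied Tier-1 count is an inference, not stated on the slide):

```python
# Disk figures vs. the 25%-of-tape figure of merit.
cern_tape_tb = 2_000
print(0.25 * cern_tape_tb)      # 500 TB, close to the 540 TB quoted for CERN
print((1_540 - 540) / 100)      # the 1,540 TB sum implies ~10 Tier-1 centres
```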

Multipliers
- CPU:
  - 2000: 70 CHF/SI95, factor of 10 from Moore
- Robotic tape:
  - 2000: 2,700 CHF/TB, factor of 2.5 from Moore
- Disk:
  - 2000: 50,000 CHF/TB, factor of 5 from Moore
- Networking:
  - 20% of the sum of the other hardware costs (a decent rule of thumb)
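
These cumulative "Moore" factors are roughly the per-year factors from the Resources slide compounded over the years between 2000 and LHC startup (the exact horizon is an assumption in this sketch):

```python
# Compounding the per-year cost-improvement factors over 5-7 years.
yearly_factor = {"CPU": 1.4, "tape": 1.18, "disk": 1.35}
for years in (5, 6, 7):
    print(years, {k: round(v ** years, 1) for k, v in yearly_factor.items()})
# CPU: 1.4^7 ~ 10.5 (quoted 10); tape: 1.18^5-6 ~ 2.3-2.7 (quoted 2.5);
# disk: 1.35^5-6 ~ 4.4-6.0 (quoted 5)
```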

Costs
- CPU:
  - CERN: 15 MCHF
  - Total: 106 MCHF (Tier-1s + CERN)
- Tape:
  - CERN: 5.4 MCHF
  - Total: 11 MCHF
- Disk:
  - CERN: 27 MCHF
  - Total: 77 MCHF
- Networking: 37 MCHF
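
These 2000-price figures follow directly from the capacities and unit costs quoted earlier (216K / 1,500K SI95 of CPU, 2 / 4 PB of tape, 540 / 1,540 TB of disk for CERN / CERN+Tier-1s); a quick reproduction:

```python
# Reproducing the 2000-price costs from capacities x unit costs.
chf_per_si95, chf_per_tb_tape, chf_per_tb_disk = 70, 2_700, 50_000

cpu  = (216_000 * chf_per_si95 / 1e6, 1_500_000 * chf_per_si95 / 1e6)  # ~15, ~105 MCHF
tape = (2_000 * chf_per_tb_tape / 1e6, 4_000 * chf_per_tb_tape / 1e6)  # 5.4, ~11 MCHF
disk = (540 * chf_per_tb_disk / 1e6, 1_540 * chf_per_tb_disk / 1e6)    # 27, 77 MCHF

network = 0.20 * (cpu[1] + tape[1] + disk[1])   # ~39 MCHF from the 20% rule,
print(cpu, tape, disk, network)                 # in line with the 37 MCHF quoted
```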

Moore's Law
- CPU:
  - CERN: 2 MCHF
  - Total: 11 MCHF (Tier-1s + CERN)
- Tape:
  - CERN: 2.2 MCHF
  - Total: 4.3 MCHF
- Disk:
  - CERN: 1.9 MCHF
  - Total: 5.5 MCHF
- Networking: 7.1 MCHF
- Comment: cannot buy everything at the last moment

Commentary
- Comparisons: ATLAS TP, Robertson
- Unit costs show wide variation (unit cost of SI95 now, robotic tape, disk)
- Moore's law: varying assumptions
- Requirements can have large variations
  - ATLAS, CMS, MONARC, etc.
- One should not take these as cast in stone; within ATLAS there are variations in:
  - CPU/event
  - Monte Carlo methodology
  - Analysis models
- Nonetheless, this serves as a starting point.