
16 October 2005 Collaboration Meeting, Slide 1: Computing Issues & Status. L. Pinsky, Computing Coordinator, ALICE-USA

16 October 2005 Collaboration Meeting, Slide 2: Overview of the ALICE Computing Plan

Some raw numbers (all from the Computing TDR):
– Recording rate: 100 Hz of MB/event raw data; with reconstruction, 3.8 PB/yr (Pb-Pb only). Two copies: one full copy at CERN and one net distributed (working) copy.
– Simulated data: 3.9 PB/yr (Pb-Pb only).
– Event Summary Data (ESD): 3.03 MB/event. Two net copies: one at CERN and one net distributed (working) copy.
– (Physics) Analysis Object Data (AOD): ~0.333 MB/event. Multiple local working copies; size varies depending upon the application; archival copies at CERN.
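The per-event ESD and AOD sizes above can be turned into rough annual volumes. The short Python sketch below is a back-of-envelope check, assuming an effective Pb-Pb running time of 10^6 seconds per year (a common LHC planning figure, not stated on this slide) together with the 100 Hz recording rate quoted above.

```python
# Back-of-envelope annual ESD/AOD volumes for one Pb-Pb year.
# ASSUMPTION: 1e6 s of effective heavy-ion running per year (typical LHC
# planning figure; this number is not on the slide).

RUN_TIME_S = 1.0e6   # assumed effective Pb-Pb running time per year [s]
RATE_HZ = 100.0      # recording rate from the slide [Hz]
ESD_MB = 3.03        # ESD size per event from the slide [MB]
AOD_MB = 0.333       # AOD size per event from the slide [MB]

events_per_year = RATE_HZ * RUN_TIME_S       # ~1e8 events
esd_pb = events_per_year * ESD_MB / 1e9      # MB -> PB (1 PB = 1e9 MB)
aod_pb = events_per_year * AOD_MB / 1e9

print(f"events/year  ~ {events_per_year:.1e}")
print(f"one ESD copy ~ {esd_pb:.2f} PB/yr")
print(f"one AOD set  ~ {aod_pb:.3f} PB/yr")
```

Under these assumptions a single ESD copy is roughly 0.3 PB/yr and a single AOD set about 0.03 PB/yr, i.e. small compared with the 3.8 PB/yr of raw-plus-reconstruction output quoted above.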

16 October 2005 Collaboration Meeting, Slide 3: Embedded Aside, Comment on the Trigger

Note that an "ESD"-like file will be saved from ALL High Level Trigger (HLT) processed events, but the full raw data will be saved only for the "selected" events. We should think about what information we want preserved in these "HLT-ESDs" from the Jet-Finding exercises…

16 October 2005 Collaboration Meeting, Slide 4: ALICE Computing Resources Needed

Totals for Pb-Pb & p-p (p-p is ~10% of Pb-Pb):
– CPU: 35 MSi2K (8.3 MSi2K at CERN, 26.7 MSi2K distributed)
– Working disk storage: 14 PB (~1.5 PB at CERN, 12.5 PB distributed)
– Mass storage: 11 PB/yr (~3.6 PB/yr at CERN, 7.4 PB/yr distributed)
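As a quick sanity check, the CERN and distributed shares quoted on this slide should add up to the stated totals. The snippet below is a minimal sketch using only the numbers above and verifies that they do.

```python
# Consistency check: CERN share + distributed share should equal the quoted total.
# All figures are taken from the slide above.
resources = {
    # name: (CERN share, distributed share, quoted total, unit)
    "CPU":          (8.3, 26.7, 35.0, "MSi2K"),
    "Working disk": (1.5, 12.5, 14.0, "PB"),
    "Mass storage": (3.6,  7.4, 11.0, "PB/yr"),
}

for name, (cern, dist, total, unit) in resources.items():
    ok = abs(cern + dist - total) < 0.05
    status = "OK" if ok else "MISMATCH"
    print(f"{name:13s}: {cern} + {dist} = {cern + dist:.1f} {unit} "
          f"(quoted {total} {unit}) {status}")
```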

16 October 2005 Collaboration Meeting, Slide 5: ALICE "Cloud" Computing Model

Tier-0 at CERN:
– HLT + first reconstruction (?) + archive
7 Tier-1 facilities (in addition to CERN):
– Mass storage for the distributed (working) copies of raw data & ESD
– Tasks: reconstruction, ESD and AOD creation
– Tier-1s are a NET capability and may themselves be distributed (cloud model)
N Tier-2 facilities:
– CPU and working disk, with no mass storage
– Focus mostly on simulation and AOD analysis

10 October Board, Slide 6: Computing Resources

Status before MoU approval by the RRB.

10 October Board, Slide 7: Sites (> 10%)

Sites which have pledged resources to ALICE and whose corresponding Funding Agencies will sign the LCG MoU:
– 6 sites with Tier-1 services: CAF (CERN), IN2P3-Lyon (France), INFN-CNAF (Italy), GridKa (Germany), UK Tier-1 (UK), NL Tier-1 (The Netherlands)
– 9 sites with Tier-2 services: INFN Tier-2 Federation (Italy), FZU AS Prague (Czech Rep.), RDIG (Russia), VECC/SINP Kolkata (India), UK Tier-2 Federations, French Tier-2 Federation (France), CCIN2P3 (France), GSI (Germany), Polish Tier-2 Federation (Poland)
Sites which have pledged resources to ALICE and are NOT part of the LCG MoU:
– 4 sites with Tier-2 services: Cape Town (South Africa), Korean Federation (Korea), Wuhan (China), OSC (USA)
– 3 sites with Tier-3 services: Bucharest, Muenster, Slovakia
Future sites:
– 1 site with Tier-1 services: LBNL (USA)
– 1 site with Tier-2 services: US Tier-2 Federation (USA)

10 October Board, Slide 8: Present Status of Pledged Resources

Tier-0 at CERN:
– ALICE requirements satisfied, including the peak for first-pass reconstruction
Tier-1 and Tier-2:
– As declared to LCG and presented to the RRB
– CAF not included

10 October Board, Slide 9: What More Can We Expect

Tier-1 and Tier-2:
– As declared to LCG and presented to the RRB
– CAF included, assuming full funding (45% so far) and equal sharing among the experiments
– Includes Tier-2s not signing the MoU and the future US contribution

10 October Board, Slide 10: Remarks

Information:
– There is a mismatch between what I get from you and what is reported by LCG: NDGF, RDIG, INFN T2, Poland (!)

10 October Board, Slide 11: We Have a Problem! Solutions?

– Ask the main Funding Agencies for additional resources… who have already pledged most of the resources
– Ask the main resource providers to reconsider the sharing algorithm among the LHC experiments:
  – ALICE produces the same quantity of data as ATLAS and CMS, and much more than LHCb
  – ALICE computing per physicist is twice as expensive as for the other experiments
– Ask all collaborating institutes (or at least those who provide nothing or only a small fraction) to provide a share of computing resources following the M&O sharing mechanism
– Have very selective triggers and take less data, or analyze all data later, waiting for better times

16 October 2005 Collaboration Meeting, Slide 12: "Nominal" US Contribution (From the TDR)

1/7 of the external distributed resources, i.e. combined Tier-1 & Tier-2 assets:
– CPU: 3.44 MSi2K (net total)
– Disk storage: 1.26 PB (net total)
– Mass storage: 0.94 PB (by 2010 & then annually)
Acquisition schedule:
– 20% by early 2008
– An additional 20% added by early 2009
– The final 60% on-line by early 2010
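The 20% / 20% / 60% schedule can be translated into yearly acquisition targets for the nominal US share. The sketch below works this out, assuming the percentages apply uniformly to CPU, disk, and mass storage (an assumption; the slide only gives overall fractions).

```python
# Phased acquisition of the nominal US contribution (totals from the slide).
# ASSUMPTION: the 20%/20%/60% schedule applies uniformly to every resource type.
nominal_us = {"CPU [MSi2K]": 3.44, "Disk [PB]": 1.26, "Mass storage [PB]": 0.94}
schedule = [("early 2008", 0.20), ("early 2009", 0.20), ("early 2010", 0.60)]

for resource, total in nominal_us.items():
    steps = ", ".join(f"{when}: +{frac * total:.2f}" for when, frac in schedule)
    print(f"{resource:19s} total {total:.2f} -> {steps}")
```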

16 October 2005 Collaboration Meeting, Slide 13: "Nominal" ALICE-USA Allocation

Planned allocation: 1/3 each for OSC, UH and NERSC (+ Livermore?):
– CPU = 1147 KSi2K
– Disk = 420 TB
– Mass storage = 0.32 PB
Estimated costs (including manpower):
– NERSC: $886K (Doug Olson, Feb. 2005)
– OSC: $750K (NSF Proposal, Oct. 2005)
– UH: $1050K (NSF Proposal, Oct. 2005)
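The per-site numbers above are simply the nominal US totals from the previous slide divided three ways; the quoted allocations can be reproduced directly, as in the sketch below.

```python
# 1/3 split of the nominal US contribution among OSC, UH and NERSC.
# Totals from slide 12, quoted per-site values from slide 13.
us_total = {"CPU [KSi2K]": 3440.0, "Disk [TB]": 1260.0, "Mass storage [PB]": 0.94}
quoted = {"CPU [KSi2K]": 1147.0, "Disk [TB]": 420.0, "Mass storage [PB]": 0.32}

for resource, total in us_total.items():
    per_site = total / 3.0
    print(f"{resource:18s}: {per_site:8.2f} per site (slide quotes {quoted[resource]})")
```

The small difference for CPU (1146.7 vs 1147 KSi2K) is just rounding on the slide.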

10 October Board, Slide 14: NSF Proposal Contents & Status

The proposal has been submitted!
– It walks a fine line between stand-alone support, ALICE-USA support, and EMCAL support.
– It argues that it will sustain a significant US stand-alone contribution (buy-in) even if EMCAL is not built.
– It uses the "Cloud" Model as its distinguishing niche.
– It asks for 2/3 of the total "nominal" US contribution:
  – Equal portions for OSC & UH
  – Announcement date: ~6 months out (Mar.-Apr. 2006)

16 October 2005 Collaboration Meeting, Slide 15: Minimum ALICE-USA Computing Resources

For the December Review exercise:
– Plan A (stay within the ALICE Computing Model), sub-options:
  – Plan A.1 = $0
  – Plan A.2 = arbitrary total with an arbitrary DOE contribution
  – Plan A.3 = DOE contributes the NERSC request
– Plan B (separate ALICE-USA resources):
  – Plan B.1 = arbitrary total from DOE
  – Plan B.2 = use existing contributions and nothing new from DOE (i.e. DOE = $0)

16 October 2005 Collaboration Meeting, Slide 16: My Suggestion for the DOE Review

Choose Plan A.1 ($0 from DOE for computing resources):
– Stay within the ALICE Computing Model!
– The overall computing needs of ALICE are a global ALICE Collaboration issue and NOT an isolated ALICE-USA issue.
– Give absolute priority to building EMCAL modules, and do not spend anything from the scarce DOE funding on computing.
– Once EMCAL is built, ALL ALICE-USA collaborators will have equal access to the overall ALICE DISTRIBUTED computing resources.
– If those resources are insufficient, we will share the burden with all of our ALICE colleagues to find the missing computing capability, but there is NO minimum that we MUST supply, or indeed NEED to have, in order to participate within the ALICE Computing Model.