Slide 1: LHCb Processing Requirements - Focus on the first year of data-taking
Report to Worldwide Analysis Review Panel, 23 March 2000 / Report to Management and Resources Panel, 24 March 2000
J. Harvey / CERN

Slide 2: Talk Outline
- Dataflow model
- Data processing and storage requirements
- Baseline computing model
- Data processing at CERN
- The first 6 months - calibration needs
- Impact on processing requirements
- Manpower
- Conclusions

Slide 3: General Comments
- LHCb Technical Notes in preparation:
  - LHCb answers to the SPP questions
  - Baseline Model of LHCb's distributed computing facilities
- The baseline model reflects current thinking:
  - based on what seems most appropriate technically
  - discussions are just starting
- Open meeting of the collaboration April 5-7:
  - feedback and changes can be expected

Slide 4: Dataflow Model
[Diagram: the detector and DAQ system deliver RAW data and calibration data to the L2/L3 trigger; accepted events (L3-YES, plus a sample of L2/L3-NO) are reconstructed to produce Event Summary Data (ESD) and reconstruction tags; first-pass analysis produces Analysis Object Data (AOD) and physics tags; physics analysis and private analysis on workstations use AOD and TAG, with access back to ESD and RAW, to produce physics results.]

Slide 5: Real Data Processing Requirements

Slide 6: Simulation Requirements

Slide 7: [Diagram: baseline computing model. A Production Centre (CPU for production; mass storage for RAW, ESD, AOD and TAG) generates raw data and performs reconstruction, first-pass analysis and user analysis. It exports AOD and TAG to the Regional Centres (CPU and data servers for user analysis): 80 TB/yr of real data and 120 TB/yr of simulated data to each centre. The Regional Centres in turn serve the Institutes (CPU for analysis; mass storage for AOD and TAG) with 8-12 TB/yr of AOD and TAG for selected user analyses.]

Slide 8: Roles of Production Centre (x1), Regional Centres (~x5) and Institutes (~x50). The export volumes are worked through in the sketch below.

Real data:
- Production Centre (x1, CERN): data collection, triggering, reconstruction, final-state reconstruction, user analysis. Output to each Regional Centre: AOD and TAG datasets, 20 TB x 4 times/yr = 80 TB/yr.
- Regional Centre (~x5, e.g. RAL): user analysis. Output to each Institute: AOD and TAG for samples, 1 TB x 10 times/yr = 10 TB/yr.
- Institute (~x50): selected user analyses.

Simulated data:
- Production Centre (RAL, Lyon, ...): event generation, GEANT tracking, reconstruction, final-state reconstruction, user analysis. Output to each Regional Centre: AOD, generator and TAG datasets, 30 TB x 4 times/yr = 120 TB/yr.
- Regional Centre (CERN, Lyon, ...): user analysis. Output to each Institute: AOD and TAG for samples, 3 TB x 10 times/yr = 30 TB/yr.
- Institute: selected user analyses.
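The export figures in this table are simple products of a per-pass dataset size and the number of passes per year. A minimal sketch, using only the numbers quoted on the slide; the ~5-Regional-Centre total at the end is an extrapolation, not a quoted figure:

```python
# Minimal sketch of the per-year AOD/TAG export volumes quoted in this table.
# All inputs are the numbers shown on the slide; the ~5-RC total at the end is
# an extrapolation, not a quoted figure.

def yearly_export(tb_per_pass: float, passes_per_year: int) -> float:
    """Volume shipped per year to one destination, in TB."""
    return tb_per_pass * passes_per_year

real_to_rc   = yearly_export(20, 4)   # 20 TB x 4 times/yr = 80 TB/yr
sim_to_rc    = yearly_export(30, 4)   # 30 TB x 4 times/yr = 120 TB/yr
real_to_inst = yearly_export(1, 10)   # 1 TB x 10 times/yr = 10 TB/yr
sim_to_inst  = yearly_export(3, 10)   # 3 TB x 10 times/yr = 30 TB/yr

print(f"to each Regional Centre: real {real_to_rc} TB/yr, sim {sim_to_rc} TB/yr")
print(f"to each Institute:       real {real_to_inst} TB/yr, sim {sim_to_inst} TB/yr")
print(f"total to ~5 RCs:         {5 * (real_to_rc + sim_to_rc)} TB/yr")
```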

Slide 9: Compute Facilities at CERN
[Diagram: compute facilities split between the experiment at LHC Pit 8 and the CERN Computer Centre. At the pit: a readout network and CPU farm for DAQ/calibration, L2/L3 trigger processing, reconstruction and re-reconstruction (2-3 times per year), with a temporary store of 10 TB for raw data and ESD and 5 TB for calibration; CDR at 80 Gb/s to the computer centre. At the computer centre: data and CPU servers with disk and mass storage (MSS) holding RAW, ESD, AOD and TAG for data storage and physics analysis, with AOD and TAG exported to the Regional Centres. Rates: data taking at 200 Hz - raw data 20 MB/s, ESD 20 MB/s; shutdown re-processing at 400 Hz - raw data 60 MB/s, ESD 60 MB/s.] (The implied event sizes are worked out in the sketch below.)
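The rates quoted in the diagram imply average event sizes and fix how quickly the 10 TB temporary store at the pit would fill. A minimal sketch using only the quoted figures; the ~2.9-day fill time is a derived estimate, not a number from the slide:

```python
# Back-of-envelope check of the data-taking rates quoted in the diagram.
# Inputs are the slide's figures; derived values are estimates only.

rate_hz      = 200   # event rate during data taking
raw_mb_per_s = 20    # RAW stream
esd_mb_per_s = 20    # ESD stream

raw_kb_per_event = raw_mb_per_s * 1e3 / rate_hz   # ~100 kB implied per event
esd_kb_per_event = esd_mb_per_s * 1e3 / rate_hz   # ~100 kB implied per event
print(f"implied RAW event size: {raw_kb_per_event:.0f} kB")
print(f"implied ESD event size: {esd_kb_per_event:.0f} kB")

# 10 TB temporary store for RAW + ESD at the pit, filled at 40 MB/s combined:
store_mb  = 10 * 1e6
fill_days = store_mb / ((raw_mb_per_s + esd_mb_per_s) * 86400)
print(f"10 TB temporary store fills in ~{fill_days:.1f} days")
```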

Slide 10: Facility at Pit - Requirements

Slide 11: CERN Computer Centre Requirements

Slide 12: LHCb - the first 6 months
- Assumptions:
  - we will get our full nominal luminosity (x1)
  - the LHC duty cycle will be lower - assume 25% (x0.5)
  - data taking will start at the end of June - assume 60 days in 2005 (x0.5)
- Factors:
  - commission the high-level triggers
  - commission the reconstruction software
  - understand calibration and alignment
- Consequences:
  - a stream of "pass-all" events to study trigger candidates (x1)
  - as the trigger improves, the b sample will get richer
  - lots of re-reconstruction (x2)

Slide 13: Calibration
- Decide what a run is (see the bookkeeping sketch below):
  - 10^7 events per day
  - 2 x 10-hour fills per day
  - 5 x 10^6 events per fill
  - 5 runs per fill (~2 hrs - data quality)
  - 10^6 events per run, or 100 GB
- Calibration:
  - VELO - 1 short run (5 mins) at SOR (start of run) for alignment
  - RICH - P, T changes can be detected and corrected in real time
  - Pass 0 - correct for defects
  - Quality - identify all 'dead wires' etc. and input at the start of the next run
[Diagram: calibration cycle from SOR to EOR - raw data and calibration data feed reconstruction, producing ESD; a quality check and a Pass 0 correction close the loop for the next run.]
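The run definition above is a chain of simple divisions. A minimal sketch using the quoted figures; the ~100 kB raw event size is inferred from "10^6 events per run or 100 GB", it is not stated separately:

```python
# Run/fill bookkeeping as sketched on this slide.  Inputs are the quoted
# figures; the raw event size is inferred from "10^6 events per run or 100 GB".

events_per_day = 1e7
fills_per_day  = 2          # 2 x 10-hour fills
runs_per_fill  = 5          # ~2 hours each, chosen for data-quality granularity

events_per_fill = events_per_day / fills_per_day      # 5e6
events_per_run  = events_per_fill / runs_per_fill     # 1e6

raw_kb_per_event = 100                                # implied by 100 GB per run
gb_per_run = events_per_run * raw_kb_per_event / 1e6  # 100 GB

print(f"events per fill : {events_per_fill:.0e}")
print(f"events per run  : {events_per_run:.0e}")
print(f"raw data per run: {gb_per_run:.0f} GB")
```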

Slide 14: Calibration - Alignment
1. Start from survey. (The following steps use real data.)
2. Alignment and calibration of the trackers: vertex, inner, outer, muon.
3. Cross-alignment between the different trackers: vertex, inner, outer, muon.
4. Alignment and calibration of the other detectors with well-measured tracks: calorimeters and RICH detectors.
- Within 1 month of the start of data taking we expect to have a good understanding of the alignment and calibration of the detector.

Slide 15: Calibration - Alignment
- Will need intensive effort to understand strange alignment effects.
- Run after reconstruction - use reconstructed track information ("second-order calibration").
- Not CPU intensive - ~25 SI95 sec / event.
- Estimate we require 10^6 events (2 hours of data taking); see the sketch below.
- Repeat the alignment after interventions on the detector: 2-3 times per year.
- Re-process the whole data sample after each new alignment is completed: estimate ~5-6 complete re-processings in the first year instead of 2-3 (x2 increase).
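With ~25 SI95·s per event and a 10^6-event sample, the total CPU for one alignment pass follows directly. A small sketch; the farm capacities used for the timing are purely illustrative assumptions, since the slide does not quote a dedicated alignment farm size:

```python
# CPU needed for one alignment pass, from the figures on this slide.
# The farm capacities below are illustrative assumptions, not quoted numbers.

si95_sec_per_event = 25     # "~25 SI95 sec / event"
events             = 1e6    # ~2 hours of data taking

total_si95_sec = si95_sec_per_event * events          # 2.5e7 SI95*s

for farm_si95 in (1_000, 5_000, 20_000):              # illustrative farm sizes
    hours = total_si95_sec / farm_si95 / 3600
    print(f"on a {farm_si95:>6} SI95 farm: ~{hours:.1f} h per alignment pass")
```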

Slide 16: Simple Analyses
- Must be ready to do interesting physics measurements, since we run with full luminosity.
- Bd -> J/psi Ks is straightforward:
  - can be studied "online" as the data are recorded
  - results compared with measurements from BaBar and BELLE
- Other channels need a very good understanding of the detector:
  - Bs -> Ds K needs an excellent understanding of the vertex detector
  - results may not come promptly with the data

Slide 17: Impact on CPU needs
- High-level triggers:
  - full luminosity implies full L2/L3 capacity needed from day 1
  - corresponds to SI95 (L2), SI95 (L3)
- Reconstruction:
  - 0.5 (duty cycle) x 0.5 (days) x 2 (reprocessings) = 0.5 (worked through in the sketch below)
  - implement half the reconstruction capacity in the first year
  - corresponds to SI95
  - install the full capacity for the second year
  - benefit from improvements in performance
  - NB: the farm will be a heterogeneous system
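The 0.5 x 0.5 x 2 = 0.5 argument can be written out explicitly. A minimal sketch with only the quoted factors, leaving the nominal reconstruction capacity symbolic because no SI95 figure is quoted here:

```python
# First-year reconstruction scaling as argued on this slide.  Only the quoted
# factors are used; the nominal capacity is left as a parameter because no
# SI95 figure is quoted here.

duty_cycle_factor   = 0.5   # LHC duty cycle ~25% instead of nominal
days_factor         = 0.5   # ~60 days of data taking in 2005
reprocessing_factor = 2     # ~5-6 re-processings instead of 2-3

first_year_fraction = duty_cycle_factor * days_factor * reprocessing_factor
print("fraction of nominal reconstruction capacity to install in year 1:",
      first_year_fraction)                                    # 0.5

def first_year_capacity(nominal_si95: float) -> float:
    """Capacity (SI95) to install in year 1, given a nominal capacity."""
    return first_year_fraction * nominal_si95
```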

Slide 18: Impact on CPU needs
- Analysis:
  - reduced event sample (x0.25)
  - intensive development of analysis algorithms
  - extra load at CERN in the first months, before the distributed computing model is fully operational
  - incentive to turn the full analysis around quickly (~2 days)
  - assume the full analysis power is needed from day 1: corresponds to 20,000 SI95
  - repeat the analysis every week - 10 reprocessings
    - 25 TB of data if all are kept (see the sketch below)
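If the full analysis is repeated weekly and every pass is kept, the quoted 25 TB fixes the per-pass output. A minimal sketch; the 2.5 TB per pass is inferred by dividing 25 TB by the 10 reprocessings, it is not quoted on the slide:

```python
# Analysis-output bookkeeping for the first months, from this slide's figures.
# The per-pass output is inferred (25 TB / 10 passes), not quoted directly.

analysis_power_si95 = 20_000   # full analysis power assumed available from day 1
passes              = 10       # repeat the full analysis roughly every week
total_kept_tb       = 25       # "25 TB of data if all are kept"

tb_per_pass = total_kept_tb / passes
print(f"implied output per analysis pass : {tb_per_pass} TB")
print(f"analysis capacity assumed, day 1 : {analysis_power_si95} SI95")
```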

Slide 19: Costs
- CPU power:
  - 80,000 SI95
  - 12 SFr / SI95
  - ~1 million SFr
- Data storage (see the sketch below):
  - events: 25 TB RAW + 25 TB ESD + 25 TB AOD = 75 TB
  - Tape: ~40,000 SFr
  - Disk: ~3 SFr/GB x ~40 TB (active) = ~120,000 SFr
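The cost estimate is a product of capacities and unit prices. A short sketch with the quoted figures; the tape price per GB is not quoted here, so ~0.5 SFr/GB is an assumption chosen to reproduce the ~40,000 SFr total, and the ~40 TB of active disk is the figure used above:

```python
# Cost estimate for the first year, using the unit costs quoted on this slide.
# The tape price per GB is not quoted; ~0.5 SFr/GB is an assumption chosen to
# reproduce the ~40,000 SFr total.

cpu_si95     = 80_000
sfr_per_si95 = 12
cpu_cost     = cpu_si95 * sfr_per_si95                   # ~1 MSFr
print(f"CPU: {cpu_cost / 1e6:.2f} MSFr")

storage_tb      = 25 + 25 + 25                           # RAW + ESD + AOD = 75 TB
tape_sfr_per_gb = 0.5                                    # assumption (not quoted)
tape_cost       = storage_tb * 1e3 * tape_sfr_per_gb     # ~37.5 kSFr ~ 40 kSFr
print(f"Tape ({storage_tb} TB): ~{tape_cost / 1e3:.0f} kSFr")

disk_active_tb  = 40                                     # active data kept on disk
disk_sfr_per_gb = 3
disk_cost       = disk_active_tb * 1e3 * disk_sfr_per_gb # ~120 kSFr
print(f"Disk ({disk_active_tb} TB active): ~{disk_cost / 1e3:.0f} kSFr")
```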

Slide 20: Assignment of responsibility
- It is understood in LHCb that the institutes building subdetectors also take responsibility for the development and maintenance of the corresponding software.
- The detector TDRs are in preparation now.
- The MoU will follow the TDRs.

Slide 21: TDR Schedule
- Magnet: Dec 1999
- Vertex: Apr 2001
- ITR: Sep 2001
- OTR: Mar 2001
- RICH: Jun 2000
- Muon: Jan 2001
- Calorimeter: Jul 2000
- Trigger L0/L1: Jan 2002
- DAQ: Jan 2002
- Computing: Jul 2002

Slide 22: Manpower for Software
Activities, with columns Need / Have / Miss / Type:
- Software frameworks (E/P)
- Software support: need 5, have 2, miss 3 (E)
- Muon software (P/E)
- Muon trigger (P)
- Tracking (P)
- Vertex: 6 (P/E)
- Trigger (L0, L1, L2, L3) (E/P)
- Calorimeter (ECAL, HCAL, Preshower) (P)
- RICH
- Total
Flat profile in time. The missing manpower needs to be acquired now.

Slide 23: Manpower for facilities and operations
- Project leader: 1 FTE
- Compute farm: 2 FTE
- Network: 0.5 FTE
- Storage, media, bookkeeping: 0.5 FTE
- Servers, desktop, OS: 2 FTE (use services of the outsource contract)
- Control-room infrastructure: 1 FTE
- Utilities for day-to-day operation: 0.5 FTE
- Training of shift crews, documentation: 0.5 FTE
- Link person for the LHC machine: 1 FTE
Total: 9 FTEs
Time profile: 3 FTEs by 2002, the remainder by 2004.