LHCb datasets and processing stages.

Similar presentations
31/03/00 CMS(UK)Glenn Patrick What is the CMS(UK) Data Model? Assume that CMS software is available at every UK institute connected by some infrastructure.

1 First Considerations on LNF Tier2 Activity E. Vilucchi January 2006.
Amber Boehnlein, FNAL D0 Computing Model and Plans Amber Boehnlein D0 Financial Committee November 18, 2002.
S. Gadomski, "ATLAS computing in Geneva", journee de reflexion, 14 Sept ATLAS computing in Geneva Szymon Gadomski description of the hardware the.
23/04/2008VLVnT08, Toulon, FR, April 2008, M. Stavrianakou, NESTOR-NOA 1 First thoughts for KM3Net on-shore data storage and distribution Facilities VLV.
ACAT 2002, Moscow June 24-28thJ. Hernández. DESY-Zeuthen1 Offline Mass Data Processing using Online Computing Resources at HERA-B José Hernández DESY-Zeuthen.
CERN/IT/DB Multi-PB Distributed Databases Jamie Shiers IT Division, DB Group, CERN, Geneva, Switzerland February 2001.
Large scale data flow in local and GRID environment V.Kolosov, I.Korolko, S.Makarychev ITEP Moscow.
Offline Software Status Jan. 30, 2009 David Lawrence JLab 1.
CERN - European Laboratory for Particle Physics HEP Computer Farms Frédéric Hemmer CERN Information Technology Division Physics Data processing Group.
KLOE Offline Status Report Data Reconstruction MonteCarlo updates MonteCarlo production Computing Farm C.Bloise – May 28th 2003.
BESIII computing 王贻芳. Peak Data volume/year Peak data rate at 3000 Hz Events/year: 1*10 10 Total data of BESIII is about 2*640 TB Event size(KB)Data volume(TB)
Alexandre A. P. Suaide VI DOSAR workshop, São Paulo, 2005 STAR grid activities and São Paulo experience.
GLAST LAT ProjectDOE/NASA Baseline-Preliminary Design Review, January 8, 2002 K.Young 1 LAT Data Processing Facility Automatically process Level 0 data.
The SAMGrid Data Handling System Outline:  What Is SAMGrid?  Use Cases for SAMGrid in Run II Experiments  Current Operational Load  Stress Testing.
Remote Production and Regional Analysis Centers Iain Bertram 24 May 2002 Draft 1 Lancaster University.
CDF data production models 1 Data production models for the CDF experiment S. Hou for the CDF data production team.
November 7, 2001Dutch Datagrid SARA 1 DØ Monte Carlo Challenge A HEP Application.
Farm Management D. Andreotti 1), A. Crescente 2), A. Dorigo 2), F. Galeazzi 2), M. Marzolla 3), M. Morandin 2), F.
Computing Infrastructure Status. LHCb Computing Status LHCb LHCC mini-review, February The LHCb Computing Model: a reminder m Simulation is using.
D0 SAM – status and needs Plagarized from: D0 Experiment SAM Project Fermilab Computing Division.
1 Kittikul Kovitanggoon*, Burin Asavapibhop, Narumon Suwonjandee, Gurpreet Singh Chulalongkorn University, Thailand July 23, 2015 Workshop on e-Science.
Data Import Data Export Mass Storage & Disk Servers Database Servers Tapes Network from CERN Network from Tier 2 and simulation centers Physics Software.
Fermilab User Facility US-CMS User Facility and Regional Center at Fermilab Matthias Kasemann FNAL.
Cosener’s House – 30 th Jan’031 LHCb Progress & Plans Nick Brook University of Bristol News & User Plans Technical Progress Review of deliverables.
MiniBooNE Computing Description: Support MiniBooNE online and offline computing by coordinating the use of, and occasionally managing, CD resources. Participants:
LHC Computing Review - Resources ATLAS Resource Issues John Huth Harvard University.
The Computing System for the Belle Experiment Ichiro Adachi KEK representing the Belle DST/MC production group CHEP03, La Jolla, California, USA March.
Modeling Regional Centers with MONARC Simulation Tools Modeling LHC Regional Centers with the MONARC Simulation Tools Irwin Gaines, FNAL for the MONARC.
Stephen Wolbers CHEP2000 February 7-11, 2000 Stephen Wolbers CHEP2000 February 7-11, 2000 CDF Farms Group: Jaroslav Antos, Antonio Chan, Paoti Chang, Yen-Chu.
LHC Computing Review Recommendations John Harvey CERN/EP March 28 th, th LHCb Software Week.
Computing for LHCb-Italy Domenico Galli, Umberto Marconi and Vincenzo Vagnoni Genève, January 17, 2001.
International Workshop on HEP Data Grid Nov 9, 2002, KNU Data Storage, Network, Handling, and Clustering in CDF Korea group Intae Yu*, Junghyun Kim, Ilsung.
CDF Offline Production Farms Stephen Wolbers for the CDF Production Farms Group May 30, 2001.
21 st October 2002BaBar Computing – Stephen J. Gowdy 1 Of 25 BaBar Computing Stephen J. Gowdy BaBar Computing Coordinator SLAC 21 st October 2002 Second.
Status of the LHCb MC production system Andrei Tsaregorodtsev, CPPM, Marseille DataGRID France workshop, Marseille, 24 September 2002.
Tier-2  Data Analysis  MC simulation  Import data from Tier-1 and export MC data CMS GRID COMPUTING AT THE SPANISH TIER-1 AND TIER-2 SITES P. Garcia-Abia.
RAL Site Report John Gordon IT Department, CLRC/RAL HEPiX Meeting, JLAB, October 2000.
EGEE is a project funded by the European Union under contract IST HEP Use Cases for Grid Computing J. A. Templon Undecided (NIKHEF) Grid Tutorial,
CMS Computing and Core-Software USCMS CB Riverside, May 19, 2001 David Stickland, Princeton University CMS Computing and Core-Software Deputy PM.
Stefano Belforte INFN Trieste 1 CMS Simulation at Tier2 June 12, 2006 Simulation (Monte Carlo) Production for CMS Stefano Belforte WLCG-Tier2 workshop.
IDE disk servers at CERN Helge Meinhard / CERN-IT CERN OpenLab workshop 17 March 2003.
The KLOE computing environment Nuclear Science Symposium Portland, Oregon, USA 20 October 2003 M. Moulson – INFN/Frascati for the KLOE Collaboration.
Overview of LHCb Computing Requirements, Organisation, Planning, Resources, Strategy LHCb UK Computing Meeting RAL May 27/28, 1999 John Harvey CERN / EP-ALC.
PC clusters in KEK A.Manabe KEK(Japan). 22 May '01LSCC WS '012 PC clusters in KEK s Belle (in KEKB) PC clusters s Neutron Shielding Simulation cluster.
26 Nov 1999 F Harris LHCb computing workshop1 Development of LHCb Computing Model F Harris Overview of proposed workplan to produce ‘baseline computing.
Report to Worldwide Analysis Review Panel J. Harvey 23 March 2000 Slide 1 LHCb Processing requirements Focus on the first year of data-taking Report to.
HIGUCHI Takeo Department of Physics, Faulty of Science, University of Tokyo Representing dBASF Development Team BELLE/CHEP20001 Distributed BELLE Analysis.
LHCb Computing Model and updated requirements John Harvey.
UTA MC Production Farm & Grid Computing Activities Jae Yu UT Arlington DØRACE Workshop Feb. 12, 2002 UTA DØMC Farm MCFARM Job control and packaging software.
Status report of the KLOE offline G. Venanzoni – LNF LNF Scientific Committee Frascati, 9 November 2004.
CBM Computing Model First Thoughts CBM Collaboration Meeting, Trogir, 9 October 2009 Volker Friese.
NA62 computing resources update 1 Paolo Valente – INFN Roma Liverpool, Aug. 2013NA62 collaboration meeting.
Large scale data flow in local and GRID environment Viktor Kolosov (ITEP Moscow) Ivan Korolko (ITEP Moscow)
Computing Issues for the ATLAS SWT2. What is SWT2? SWT2 is the U.S. ATLAS Southwestern Tier 2 Consortium UTA is lead institution, along with University.
MC Production in Canada Pierre Savard University of Toronto and TRIUMF IFC Meeting October 2003.
OPERATIONS REPORT JUNE – SEPTEMBER 2015 Stefan Roiser CERN.
Hans Wenzel CDF CAF meeting October 18 th -19 th CMS Computing at FNAL Hans Wenzel Fermilab  Introduction  CMS: What's on the floor, How we got.
Main parameters of Russian Tier2 for ATLAS (RuTier-2 model) Russia-CERN JWGC meeting A.Minaenko IHEP (Protvino)
LHCb Current Understanding of Italian Tier-n Centres Domenico Galli, Umberto Marconi Roma, January 23, 2001.
Jianming Qian, UM/DØ Software & Computing Where we are now Where we want to go Overview Director’s Review, June 5, 2002.
CDF SAM Deployment Status Doug Benjamin Duke University (for the CDF Data Handling Group)
1 P. Murat, Mini-review of the CDF Computing Plan 2006, 2005/10/18 An Update to the CDF Offline Plan and FY2006 Budget ● Outline: – CDF computing model.
ATLAS – statements of interest (1) A degree of hierarchy between the different computing facilities, with distinct roles at each level –Event filter Online.
Philippe Charpentier CERN – LHCb On behalf of the LHCb Computing Group
Southwest Tier 2.
The ATLAS Computing Model
Development of LHCb Computing Model F Harris
The LHCb Computing Data Challenge DC06
Presentation transcript:

LHCb datasets and processing stages

Slide 2: LHCb datasets and processing stages
[Diagram: the LHCb datasets and processing stages, annotated with per-event data sizes of 200 kB, 100 kB, 70 kB, 0.1 kB, 10 kB, 150 kB and 0.1 kB, and a 200 Hz event rate.]

Slide 3 (J. Harvey, EVTEC, 10 Nov 1998): Data Rates
[Diagram: data flows between a central DATA REPOSITORY and the processing stages DAQ, Reconstruction, Reprocessing and Simulation, covering the dataset types Raw, Reconstructed, Tags, Analysis and User Analysis (physics analysis). Annotated rates: 660 MB/s, 360 MB/s, 4 MB/s, 20 MB/s, 14 MB/s, 20 MB/s, 14 MB/s and 35 MB/s.]

Slide 4: Data Volumes

Assume a run of 10^7 seconds each year.

  Data type                       Rate       Volume/year
  a) Raw data                     20 MB/s    200 TB
  b) Interesting physics data      1 MB/s     10 TB
  c) Simulated data (b x 10)      10 MB/s    100 TB
  d) Reconstructed raw            14 MB/s    140 TB
  e) Reconstructed sim             7 MB/s     70 TB
  f) Analysis data                 4 MB/s     40 TB
  TOTAL                                      560 TB
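The yearly volumes follow directly from the rates: each dataset's volume is its sustained rate multiplied by the assumed 10^7-second running year. A minimal sketch of that arithmetic (dataset names and rates are taken from the table above; the script itself is illustrative, not part of the original slides):

```python
# Yearly data volumes from sustained rates, assuming a 10^7 s running year
# (rates and dataset names from the "Data Volumes" table above).
SECONDS_PER_YEAR = 1e7

rates_mb_per_s = {
    "Raw data": 20,
    "Interesting physics data": 1,
    "Simulated data (b x 10)": 10,
    "Reconstructed raw": 14,
    "Reconstructed sim": 7,
    "Analysis data": 4,
}

total_tb = 0.0
for name, rate in rates_mb_per_s.items():
    volume_tb = rate * SECONDS_PER_YEAR / 1e6  # 1 TB = 10^6 MB
    total_tb += volume_tb
    print(f"{name:28s} {rate:3d} MB/s -> {volume_tb:4.0f} TB/year")
print(f"{'TOTAL':28s}           {total_tb:4.0f} TB/year")
```

Running this reproduces the table, including the 560 TB/year total.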

Slide 5: CPU Resources

Assumption: the CPU power per processor in 2004 will be ~4000 MIPS.

CPU resources at the experiment:
  data production and triggering      1,400 x 1000 MIPS    350 nodes
  reconstruction                        800 x 1000 MIPS    200 nodes
  Total                                                    550 nodes

Event simulation and reprocessing (4-month duty cycle):
  Monte Carlo production              1,400 x 1000 MIPS    350 nodes
  reprocessing, event tag creation      800 x 1000 MIPS    200 nodes
  Total                                                    550 nodes

Physics analysis:
  physics production    10 groups, 10^7 kMs per refinement per week     46 nodes
  user analysis         ... kMs/job, x2 per week                        91 nodes
  Total                                                                137 nodes
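The node counts for the experiment and simulation farms are simply the required capacity divided by the assumed per-processor power. A small sketch of that arithmetic (task names and figures from the slide; the script is illustrative only):

```python
# Node counts from required capacity, assuming ~4000 MIPS per processor
# in 2004 (the slide's own assumption).
MIPS_PER_NODE = 4000

requirements = {  # task -> required power in units of 1000 MIPS
    "data production and triggering": 1400,
    "reconstruction": 800,
    "Monte Carlo production": 1400,
    "reprocessing, event tag creation": 800,
}

for task, kmips in requirements.items():
    nodes = kmips * 1000 // MIPS_PER_NODE
    print(f"{task:35s} {kmips:5d} x 1000 MIPS -> {nodes:4d} nodes")
```

This reproduces the 350- and 200-node figures and hence the 550-node totals.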

Slide 6: CPU Farms
- Set up low-maintenance commodity processor farms
  - PCs running NT or Linux
  - central server, rack-mounted screenless nodes
- Automated mass installation, upgrade and booting
- Remote disk access by many satellite nodes to the central server
- Remote management and monitoring
  - centralised error logs, alarms
  - performance
  - node failure detection and recovery (see the sketch after this list)
- Tools for messaging
- Production facilities: batch, scripts, bookkeeping, tape management
Common LHC project: Filter Farms
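The slide lists requirements, not implementations. As a purely hypothetical sketch of the "node failure detection" bullet, assuming Linux farm nodes reachable by name (the hostnames and the recovery action are invented, not from the slides):

```python
# Hypothetical node-failure detection for a commodity farm: ping each
# node once and report non-responders to a centralised log.
import subprocess

NODES = [f"farmnode{n:03d}" for n in range(1, 6)]  # invented hostnames

def node_alive(host: str) -> bool:
    """Send one ICMP ping with a 1-second timeout (Linux ping flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", host],
                            capture_output=True)
    return result.returncode == 0

for host in NODES:
    if not node_alive(host):
        # A real farm manager would raise an alarm and trigger recovery;
        # here we only write to the central error log (stdout).
        print(f"ALARM: {host} is not responding")
```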

Slide 7: Data Management
Technology
- Database technology: ODBMS
- Mass storage: HPSS
Datasets
- Event repository
- Parameter descriptions: detector, ...
- Secondary data: calibrations, alignment, output from quality checks
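The slide names the dataset categories without describing their layout. A hypothetical sketch of how one bookkeeping entry might tie them together (every field name here is invented for illustration):

```python
# Hypothetical bookkeeping entry linking the dataset categories on the
# slide: events, detector parameter descriptions, and secondary data.
from dataclasses import dataclass

@dataclass
class DatasetEntry:
    dataset: str            # e.g. "raw", "reconstructed", "tags"
    run_number: int
    detector_version: str   # parameter descriptions (detector, ...)
    calibration_tag: str    # secondary data: calibrations, alignment
    passed_quality: bool    # output from quality checks
    size_bytes: int

entry = DatasetEntry("raw", 1042, "det-v3", "calib-1998-11", True, 200_000)
print(entry)
```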

Slide 8: Computing Model - CE(R)NTRIC

Slide 9: Computing Model - DISTRIBUTED

Slide 10: Computing Model - To be studied
- MONARC study
- access patterns
- simulation
- technology watch