
Workflows and Data Management

Workflow and DM: Run3 and after, conditions

- LHCb's major upgrade is for Run3 (2020 horizon)!
  - Luminosity × 5 (2×10³³ cm⁻²s⁻¹)
    - Lumi levelling: higher pileup, from 1.1 to 5.5
  - Trigger rate × 5 (at least; dominated by charm physics)
  - RAW data size × 2 (pileup)
  - Online reconstruction = offline reconstruction
    - Allows direct analysis from online data (TURBO stream)
      - The TURBO data format is directly analysis data (no RAW!)
  - Output from DAQ:
    - Any linear combination, from TURBO data to full reconstruction output (reco + RAW)
    - Use year "n" data to tune TURBO for year "n+1"!
  - Throughput between 6 and 10 GB/s (the level of today's GPDs); see the sketch after this list
  - Trigger (SW only) == offline selection
    - Stripping and streaming are no longer effective (all events are for physics!)
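As a sanity check on the quoted 6-10 GB/s range, here is a back-of-envelope calculation. The trigger output rate and event sizes below are illustrative assumptions, not official LHCb upgrade parameters:

```python
# Back-of-envelope check of the quoted 6-10 GB/s DAQ output.
# Rate and event sizes are illustrative assumptions only.

hlt_output_rate_hz = 100e3   # assumed software-trigger output rate
turbo_event_kb = 30          # assumed TURBO (analysis-level) event size
full_event_kb = 100          # assumed full reco + RAW event size

for name, size_kb in [("all TURBO", turbo_event_kb),
                      ("all reco+RAW", full_event_kb)]:
    gb_per_s = hlt_output_rate_hz * size_kb * 1e3 / 1e9
    print(f"{name:12s}: {gb_per_s:.1f} GB/s")

# A mix of TURBO and full-reconstruction output lands between the
# two extremes, bracketing the 6-10 GB/s range quoted on the slide.
```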

Workflow and DM: Online Calibration & Alignment

- Novel concept: detector alignment & calibration done in between the two stages of HLT processing (sketched below)
  - Successfully exercised in 2015
  - Part of the online reconstructed events immediately available for user analysis
  - Enables HLT2 processing with better signal yield
  - Same constants used for offline processing
  - Concept will be further exploited in Run 3
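A minimal sketch of this dataflow, with trivial stand-ins for HLT1, the calibration step, and HLT2 (all names here are hypothetical); only the control flow, select, buffer, derive constants, then reconstruct with those constants, mirrors the concept described above:

```python
# Two-stage HLT with alignment/calibration in between (sketch).
# Function bodies are placeholders; the control flow is the point.
import random

def run_hlt1(event):
    """Stage 1: fast inclusive selection on the online farm."""
    return event["pt"] > 0.5          # stand-in for the real HLT1 decision

def compute_constants(buffered):
    """Alignment/calibration on buffered events, between HLT1 and HLT2.
    The same constants are later used for offline processing."""
    return {"alignment": sum(e["pt"] for e in buffered) / len(buffered)}

def run_hlt2(event, constants):
    """Stage 2: offline-quality reconstruction with fresh constants."""
    return {"event": event, "reco_quality": constants["alignment"]}

events = [{"pt": random.random() * 2} for _ in range(1000)]
disk_buffer = [e for e in events if run_hlt1(e)]   # parked on disk buffer
constants = compute_constants(disk_buffer)         # derived online
output = [run_hlt2(e, constants) for e in disk_buffer]
print(f"{len(output)} events reconstructed online with shared constants")
```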

Workflow and DM: Trains and indices

- Using event indices for analysis
  - Replace "stripping + streaming" with "selection + indexation"
    - Because stripping retention will be high (more selective trigger)
  - Event set query to a central (or local) index
    - Download a local event collection (i.e. direct access addresses)
  - Random access to local or remote data
    - Using a local replica catalog (Gaudi Federation)
- R&D can start now (2016/17) for:
  - Setting up train analyses
    - Framework similar to stripping
  - Data indexing (a sketch follows this list)
    - Select technology (central vs distributed, DB vs files)
    - Index content to be defined
    - Event set queries to be defined for jobs
  - Optimizing random access through ROOT
- Not to forget for analysis data access:
  - Network bandwidth is not everything: disk spindles are equally important
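To make the indexing idea concrete, here is a sketch using SQLite as the backend; the slide explicitly leaves the technology choice open, and the schema, LFNs, and selection-line names below are purely illustrative:

```python
# "Selection + indexation" sketch with SQLite as one candidate
# backend. Schema and contents are illustrative assumptions.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE event_index (
                   lfn   TEXT,     -- logical file name
                   entry INTEGER,  -- entry number inside that file
                   line  TEXT,     -- selection line that fired
                   mass  REAL)""")  # example indexed quantity

# Indexing step: productions fill the index instead of writing
# stripped/streamed copies of the selected events.
con.executemany("INSERT INTO event_index VALUES (?,?,?,?)",
                [("LFN:/lhcb/example/00001.dst", 7,  "D2KPi",   1864.5),
                 ("LFN:/lhcb/example/00001.dst", 42, "D2KPi",   1869.9),
                 ("LFN:/lhcb/example/00002.dst", 3,  "B2JpsiK", 5279.1)])

# Event set query: the job retrieves direct-access addresses only,
# i.e. (file, entry) pairs forming the local event collection.
rows = con.execute(
    "SELECT lfn, entry FROM event_index "
    "WHERE line = ? AND mass BETWEEN ? AND ?",
    ("D2KPi", 1860.0, 1870.0)).fetchall()
print(rows)   # [(lfn, entry), ...] -> random access via ROOT
```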

Workflow and DM: Analysis job using event index

[Diagram: the analysis Job sends an event set query to the Central Event Index and receives a Local Event Catalog; file locations are resolved through the Local Replica Catalog / Replica Catalog before reading from local or remote storage.]

One size does NOT fit all: we will have different event formats for different analyses (microDST, DST, something in between?).
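The job side of the diagram can be sketched as follows, assuming PyROOT is available; the replica-catalog contents, LFNs, and tree name are hypothetical, while TFile::Open and TTree::GetEntry are standard ROOT calls:

```python
# Job-side sketch: resolve LFNs through a replica catalog (local
# replica first, remote fallback: the Gaudi Federation idea), then
# random-access only the indexed entries.
from itertools import groupby
import ROOT

# LFN -> ordered list of PFNs, local first. Contents are hypothetical.
replica_catalog = {
    "LFN:/lhcb/example/00001.dst": [
        "file:/data/00001.dst",
        "root://remote.site//lhcb/example/00001.dst"],
}

def open_replica(lfn):
    """Try each replica in turn and return the first that opens."""
    for pfn in replica_catalog[lfn]:
        f = ROOT.TFile.Open(pfn)
        if f and not f.IsZombie():
            return f
    raise IOError("no accessible replica for %s" % lfn)

# Local event collection as returned by the index query, grouped by
# file (the list is sorted by LFN) so each file is opened only once.
local_event_collection = [("LFN:/lhcb/example/00001.dst", 7),
                          ("LFN:/lhcb/example/00001.dst", 42)]

for lfn, entries in groupby(local_event_collection, key=lambda p: p[0]):
    f = open_replica(lfn)
    tree = f.Get("Events")        # hypothetical tree name
    for _, entry in entries:
        tree.GetEntry(entry)      # random access: read indexed events only
        # ... analysis code for this event ...
    f.Close()
```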