Alignment in real-time in current detector and upgrade
6th LHCb Computing Workshop, 18 November 2015
Beat Jost / CERN


Preamble
❏ I kept the title that was in the agenda, HOWEVER
  ➢ The alignment procedure described is NOT Real-Time, but rather (bicycle) Online
  ➢ I reckon, in the future, this will not drastically change

General Idea
❏ Use the HLT farm nodes to process (selected) event data (Analysers): ~1800 copies
❏ The results of the event analysis are collected by an Iterator (singleton) that provides the next set of parameters the Analysers will use to redo the event processing
❏ Once the iteration process has converged (or failed), the alignment process is finished and the final parameter set is stored for future use
❏ The entire process is driven and steered by the run controller via the FSM machinery
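The overall flow is a simple iterate-until-converged loop. Below is a minimal sketch of that loop; all type and function names (AlignmentConstants, runAnalysers, iterate, storeForFutureUse) are illustrative placeholders, not the actual framework classes, and failure handling is omitted.

```cpp
#include <vector>

struct AlignmentConstants { /* current set of alignment parameters   */ };
struct AnalysisResult     { /* partial result from one Analyser copy */ };

// Run the ~1800 Analyser copies on the selected events with the given
// constants and collect their published results (hypothetical helper).
std::vector<AnalysisResult> runAnalysers(const AlignmentConstants& c);

// Iterator (singleton): combine the results and compute the next set of
// constants; 'converged' is set when the procedure has finished.
AlignmentConstants iterate(const std::vector<AnalysisResult>& results,
                           bool& converged);

void storeForFutureUse(const AlignmentConstants& c);

AlignmentConstants alignmentRun(AlignmentConstants constants)
{
  bool converged = false;
  while (!converged) {
    auto results = runAnalysers(constants);       // distributed event analysis
    constants    = iterate(results, converged);   // next parameter set
  }
  storeForFutureUse(constants);                   // final parameter set
  return constants;
}
```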

Pictorially
[Diagram: an Iterator on a central node exchanges alignment constants and analysis results with ~1800 Analyser copies running on the farm nodes, each processing event data]

Task Finite State Machine
❏ Standard Online task FSM
❏ Each task in the system runs this FSM
❏ The sequencing of the different tasks is performed by the run controller and its internal rules
[Diagram: states Offline, Ready, Running, Paused; transitions configure (initialize), start, pause, continue, stop, reset (finalize)]
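For illustration, the task FSM can be written as a small transition table. The states and command names are taken from the slide; the table encoding itself is a sketch, and only the transitions actually exercised by the alignment sequence on the next slide are listed (the real FSM may allow more).

```cpp
#include <map>
#include <string>
#include <utility>

enum class State { Offline, Ready, Running, Paused };

// (current state, command) -> next state
const std::map<std::pair<State, std::string>, State> kTransitions = {
  { { State::Offline, "configure" }, State::Ready   },  // initialize
  { { State::Ready,   "start"     }, State::Running },
  { { State::Running, "pause"     }, State::Paused  },
  { { State::Paused,  "continue"  }, State::Running },
  { { State::Paused,  "stop"      }, State::Ready   },  // results are published here
  { { State::Ready,   "reset"     }, State::Offline },  // finalize
};
```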

Global FSM and sequencing
❏ The Analysers issue the pause transition (slave-initiated) when they have finished processing the events (EoF).
❏ The run controller only sends the stop command to the Analysers when ALL of them are paused. During this transition the Analysers 'publish' the results of the analysis.
❏ When all Analysers are in the ready state, the run controller sends the pause command to the Iterator, which collects the results of the Analysers, calculates the next set of parameters and issues the continue command to the run controller.
❏ The run controller then issues the start command to the Analysers, which read the new parameters and analyse the data again.
❏ And so on…
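The same sequencing, seen from the run controller, looks roughly like the sketch below. The Task types and the sendCommand/waitUntilAll/waitForIterator helpers are hypothetical placeholders, not the actual run-control (PVSS/SMI++) interfaces.

```cpp
#include <string>
#include <vector>

enum class TaskState { Offline, Ready, Running, Paused };

struct Task {};
using TaskGroup = std::vector<Task>;

// Placeholders for the real run-control primitives:
void        sendCommand(TaskGroup& tasks, const std::string& cmd);
void        sendCommand(Task& task, const std::string& cmd);
void        waitUntilAll(const TaskGroup& tasks, TaskState s);
std::string waitForIterator(Task& iterator);   // e.g. "continue" or "done"

void alignmentSequence(TaskGroup& analysers, Task& iterator)
{
  sendCommand(analysers, "start");                    // begin first pass
  for (;;) {
    waitUntilAll(analysers, TaskState::Paused);       // slave-initiated pause at EoF
    sendCommand(analysers, "stop");                   // Analysers publish their results
    waitUntilAll(analysers, TaskState::Ready);
    sendCommand(iterator, "pause");                   // Iterator collects results and
    if (waitForIterator(iterator) != "continue")      //   computes the next parameters
      break;                                          // converged (or failed): finish
    sendCommand(analysers, "start");                  // re-analyse with new parameters
  }
}
```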

Framework Components
❏ The tasks (Iterator and Analyser) are each composed of two components
  ➢ A framework service that ensures proper interaction with the run controller
  ➢ A user component (AlgTool) that implements the real work of the alignment process
❏ Iterator:
  ➢ Framework code: AlignDrv, implementing IAlignDrv + OnlineService
  ➢ User code: implements the IAlignIterator interface (basically one routine), calls the methods of IAlignDrv and does the real work
❏ Analyser:
  ➢ If a "standard" event-processing application (e.g. Brunel-like) is run, no additional coding is needed, besides reading the parameter set in the start transition. The pause transition is called automatically by the online file selector at EoF.
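To make the Iterator-side split concrete, a rough sketch of the two interfaces follows. IAlignDrv and IAlignIterator are the interface names quoted above, but the member functions shown are invented placeholders for illustration only; the real declarations live in the LHCb online framework.

```cpp
// Framework side (AlignDrv + OnlineService): handles the FSM and the
// interaction with the run controller, and exposes a driver interface
// that the user code calls back into.
struct IAlignDrv {
  virtual ~IAlignDrv() = default;
  virtual void doContinue() = 0;   // hypothetical: request another iteration
  virtual void doStop()     = 0;   // hypothetical: declare convergence/failure
};

// User side: an AlgTool implementing essentially one routine, invoked by
// the framework once all Analyser results have been published.
struct IAlignIterator {
  virtual ~IAlignIterator() = default;
  // hypothetical signature: read the results, compute the next constants,
  // then call IAlignDrv::doContinue() or IAlignDrv::doStop()
  virtual void iterate(IAlignDrv& driver) = 0;
};
```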

Status
❏ The alignment procedures are all implemented
  ➢ VELO, Tracker and Muon alignment are executed automatically at each fill (when sufficient statistics are available)
    ➥ VELO: within a few minutes after closing
      – Triggers a run change if the deviations are sufficiently big
    ➥ Tracker: takes a bit longer to accumulate the necessary statistics
    ➥ Muon: takes very long (several fills) to accumulate the necessary statistics
  ➢ Calo (π0) calibration is run manually, very rarely (a few times per year)
  ➢ RICH mirror alignment is run 'manually' when needed
❏ Bandwidth-Division also uses the same framework
  ➢ Run when no other activities are executed
    ➥ Takes a lot of CPU power
    ➥ Spawns 20+ tasks…

Alignment in the upgrade
❏ Assuming the farm nodes still have local disks (very likely, as we will probably still have a split HLT)
  ➢ There is not really a reason to deviate from the current scheme
  ➢ Unless… requirements (e.g. the frequency of alignment) change drastically…
❏ A (somewhat provocative) alternative scheme: continuous, distributed alignment and calibration
  ➢ Just a fresh idea, surely not yet completely fermented…

Continuous, distributed alignment and calibration
❏ Basic idea
  ➢ Each HLT1 trigger task aligns/calibrates continuously while it is processing the event data
    ➥ No central entity (Iterator) that has to collect the results of the event processing (everything is embedded in the HLT1 process)
  ➢ Pros:
    ➥ No central Iterator
    ➥ No out-of-process information transfer (XML files)
    ➥ Re-uses information and computational results already present (reconstruction is already partially performed during HLT processing)
    ➥ Does not rely on local disks
    ➥ Seamless update of alignment constants
  ➢ Cons:
    ➥ Possible slow-down of HLT processing
      – Probably marginal (depending on the implementation)
    ➥ Each process has its own alignment constants
      – Should be the same (statistically), as events are independent
    ➥ Need to devise a system to convey the actual alignment used to the offline software (just in case…)