Calibration streams in the Event Filter. Status report Mainz, Thursday 13 October 2005 Sander Klous – NIKHEF On behalf of the EF calibration team: Martine Bosman, Andrea Negri, Serge Sushkov and Sarah Wheeler.

Slide 2: Physics streams.

Trigger chain: 40 MHz x 1.5 MByte → Level 1 → 75 kHz x 1.5 MByte → Level 2 → 2 kHz x 1.5 MByte → Event Filter (number of nodes: ~1500, processing time: 1 sec/evt) → output: 200 Hz, 320 MB/s. Calibration streams are produced alongside the physics streams.

Outline:
- Definition and scope.
- Calibration issues in the EF (at the moment only EF output).
- Identify calibration types: size, rate, contents.
- Requirements for the EF: data flow, processing issues.
- Implementation scenarios.
- Design and modifications: processing, memory management, networking/timing.
- Plan of work.
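A quick arithmetic cross-check of the figures above (a sketch; the only assumption is that the quoted output bandwidth and output rate refer to the same average event size):

```python
# Cross-check of the trigger-chain numbers quoted on the slide.
# All input figures are from the slide; derived values are plain arithmetic.

l2_output_hz = 2_000       # Level 2 accept rate = Event Filter input rate
ef_output_hz = 200         # Event Filter output rate
ef_output_mb_s = 320       # Event Filter output bandwidth

event_size_mb = ef_output_mb_s / ef_output_hz   # implied average event size
ef_rejection = l2_output_hz / ef_output_hz      # EF rejection factor

print(event_size_mb, ef_rejection)  # 1.6 MB/event, rejection factor 10
```

So 320 MB/s at 200 Hz implies ~1.6 MB events, and the EF reduces the 2 kHz Level-2 output by a factor of 10.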

Slide 3: Use cases (identification of calibration streams).

Based on the Hawkings/Gianotti document: all listed calibration types are identified at HLT level (after PESA).

Known calibration types in the HLT:
- Various duplicates of physics streams, e.g. inclusive high-pT electrons and muons (tracking), Z to di-lepton (energy), minimum bias (background). Total: 35 MB/s (10% of physics data).
- Liquid Argon Calorimeter: pulse shape analysis, timing calibration and tuning of filter coefficients. High-pT electron sample; electromagnetic data only; ROI only; raw data, 5 consecutive samples (i.e. a special event type).
- Calorimeters and TRT: hadronic response studies, comparison to test-beam data; TRT e/π separation. High-pT isolated hadrons; all subdetectors; ROI only; raw data.

Slide 4: Use cases – continued (identification of calibration streams).

- MDT small chambers: hourly realignment. Small muon sample; MDT information only; overlap regions only; reprocessing of raw data.
- Inner Detector subdetectors (Pixel, SCT and TRT): ROD monitoring (TRT only) and alignment. Generic high-pT events; all subdetectors; ROI only; post-processing of track-fit information at HLT level.

Other foreseen calibration types:
- The Liquid Argon Calorimeter might need Z → ee calibration at HLT level.
- A high-statistics (1 kHz) ROI muon sample, containing MDT, CSC and RPC/TGC information.
- Your favorite missing calibration stream…

Slide 5: Data flow characterization.

(Diagram; recoverable figures:)
- Physics: full events (subdetector fragments 1…N plus Lvl 1/2 info) arrive from the SFI with ~19 ms transport time; EF processing: 1 second/event; output to SFO at 320 MB/s.
- Full calibration events (sorting): High pT → SFO at 32 MB/s; Z di-lepton → SFO at 1.6 MB/s; Min. bias → SFO at 1.6 MB/s.
- Partial calibration events (stripping/collecting): OverlapMu → SFO at 5 kB/s; LAr → SFO at 2.5 MB/s; IsoHad → SFO at 2 MB/s; GenPT → SFO at 4 MB/s. Transport times/rates quoted for LAr, OverlapMu, IsoHad, GenPT: 0.5 ms, 0.05 ms, 5 ms, 0.5 ms; 50 Hz, 5 Hz, 100 Hz.
- Special events (additional processing): LVL2Cal at 1 kHz(!) – a partial event (subdetector fragment 1 plus ROI info), delivered via the SFI(?) with 0.01 ms transport time; output at 1 MB/s.

Slide 6: EF processing issues.

Definitions:
- Sorting for calibration: CalID.
- Stripping/collecting for calibration: CalCollect.
- Detector calibration: CalDetect or Calibration.

Full event streams:
- Very similar to physics streams.
- Output to multiple SFOs/streams.
- Sorting of events for calibration (CalID).

Partial event streams:
- Processing similar to the physics stream.
- Stripping and collecting (CalCollect).
- Handling of a different output event size.
- Sometimes requires additional processing (CalDetect/Calibration).

Special event streams:
- Processing times completely different from the physics stream.
- Handling of different input and output event sizes.

Central issue: robustness of the EFD.

(Diagram: EFD on node n with SFI input, ExtPTs, PT #1 and PT #2 with PTIO, a calibration PT, the Event and Result in the SharedHeap, and outputs to the main output stream SFO, a diagnostic SFO, a calibration stream SFO, and Trash.)

Slide 7: EF processing issues (continued).

(Diagram: the baseline dataflow application – EFD on node n with SFI input, ExtPTs, the Event in the SharedHeap, and the main output stream to the SFO.)

Same definitions and stream categories as on the previous slide; the central issue remains the robustness of the EFD.

Slide 8: Calibration stream scenarios (1).

Additional functionality:
- CalID algorithm (sorting).
- Parallel output streams.

(Diagrams: for physics-only events, PESA in PT #1 routes accepted events to the main output stream SFO and rejected ones to Trash. For full calibration events, e.g. Z di-lepton, the CalID answer additionally routes a copy to a dedicated calibration stream SFO.)
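The CalID sorting step can be pictured as a routing function that maps a PT's answer onto output streams (a minimal sketch; the stream names and the PtAnswer layout are illustrative, not the actual EFD interfaces):

```python
# Minimal sketch of CalID-style sorting: after PESA has accepted or rejected
# an event, a lightweight CalID step tags it for zero or more calibration
# streams, and the EFD routes one copy per matched stream.
# Stream names and the pt_answer structure are illustrative assumptions.

def route_event(pt_answer):
    """Map a PT answer to the list of output streams for this event."""
    streams = []
    if pt_answer["pesa_accept"]:
        streams.append("physics")              # main output stream
    for tag in pt_answer.get("calib_tags", []):
        streams.append("calib_" + tag)         # parallel calibration streams
    if not streams:
        streams.append("trash")                # rejected, no calibration use
    return streams

# A Z di-lepton event goes to both the physics and the calibration stream:
print(route_event({"pesa_accept": True, "calib_tags": ["Zdilepton"]}))
```

The point of the parallel-output modification is visible here: one event can legitimately end up in several streams at once.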

Slide 9: Calibration stream scenarios (2).

Additional functionality:
- PT for calibration: information handling, stripping/collecting.
- Memory management.
- Networking/timing issues (special streams).

(Diagrams: for partial calibration events, e.g. GenPT, PT #1 runs PESA and CalID, a dedicated calibration PT does the stripping/collecting into a CalResult, and the EFD writes calibration streams 1 and 2 to SFOs alongside the main output stream. For special streams, e.g. LVL2Cal, events enter via the SFI(?), are sorted, processed by the calibration PT, and leave through a dedicated EF output.)
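Stripping/collecting can be sketched as reducing a full event to just the fragments a calibration stream needs (the event layout, subdetector names and per-stream configuration are illustrative assumptions, not the real eformat):

```python
# Sketch of CalCollect-style stripping/collecting: keep only the subdetector
# fragments a given calibration stream wants (here, optionally ROI-only),
# so the partial output event is much smaller than the full input event.

STREAM_CONFIG = {
    # stream name -> (wanted subdetectors, ROI-only?)
    "LAr":       ({"LAR_EM"}, True),
    "OverlapMu": ({"MDT"}, True),
}

def strip_event(event, stream):
    wanted, roi_only = STREAM_CONFIG[stream]
    fragments = [
        f for f in event["fragments"]
        if f["subdet"] in wanted and (not roi_only or f["in_roi"])
    ]
    # Collect into a partial calibration event; keep the trigger-info header.
    return {"lvl12_info": event["lvl12_info"], "fragments": fragments}

event = {
    "lvl12_info": {"lvl1_id": 42},
    "fragments": [
        {"subdet": "LAR_EM", "in_roi": True,  "data": b"\x01"},
        {"subdet": "LAR_EM", "in_roi": False, "data": b"\x02"},
        {"subdet": "TILE",   "in_roi": True,  "data": b"\x03"},
    ],
}
partial = strip_event(event, "LAr")
print(len(partial["fragments"]))  # only the in-ROI LAr fragment survives
```

This is also why the EFD has to cope with a different output event size: the partial event above is a fraction of its input.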

Slide 10: Design and modifications (1).

CalID algorithm:
- Lightweight algorithm.
- Runs after PESA in the physics PT – stability issues. Athena configuration: multiple top algorithms.
- Workload: low – implementation only, but thorough testing required.
- Impact: high – required for (almost) all calibration streams.
- Coordination: sorting – new PT answers should be discussed.

Parallel output streams:
- Slight modification of an existing algorithm.
- Runs in the EFD; probably required for PESA as well.
- Workload: low – modification of a standard EFD task.
- Impact: high – required for most (calibration) streams.

Slide 11: Design and modifications (2).

PT for calibration:
1. Create a stripping/collecting algorithm. Requires the new eformat (see next slide) and modifications in the output task.
2. Allow multiple PTs to run consecutively (works already).
3. Transfer information between these PTs. Should be possible with the new eformat for EFResult / CalResult.
4. It might be interesting to transfer "intermediate results". This would avoid running calibration algorithms in the same PT as PESA, or reanalyzing the complete event in a second PT. Since the EF is a dataflow application, this should be accomplished by writing an extra "Intermediate Result" object into the SharedHeap – which requires ByteStream conversion for complex classes.

Workload: medium – with the exception of item 4 (no use cases yet).
Impact: high – maybe with the exception of item 4.
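Items 2–4 can be sketched as two PTs communicating through the per-node shared heap (a dict stands in for the SharedHeap and json for the missing ByteStream converters; the key and field names are illustrative assumptions):

```python
# Sketch of consecutive PTs sharing results: the first PT (PESA + CalID)
# leaves a serialized EFResult in the shared heap, and a second calibration
# PT reuses it instead of reanalyzing the complete event. json stands in for
# ByteStream conversion and, like it, only handles simple flat structures --
# the limitation noted on the slide for complex classes.
import json

shared_heap = {}

def pt_pesa(event_id, event):
    # First PT: run selection, serialize the result into the shared heap.
    result = {"accept": True, "tracks": [[0.1, 0.2], [0.3, 0.4]]}
    shared_heap[(event_id, "EFResult")] = json.dumps(result)

def pt_calibration(event_id):
    # Second PT: pick up the intermediate result, no event reprocessing.
    ef_result = json.loads(shared_heap[(event_id, "EFResult")])
    cal = {"n_tracks": len(ef_result["tracks"])}
    shared_heap[(event_id, "CalResult")] = json.dumps(cal)

pt_pesa(1, event={})
pt_calibration(1)
print(json.loads(shared_heap[(1, "CalResult")]))
```

The serialization step is the crux: anything placed in the heap must survive a flat byte-level representation, which is cheap for records like these and hard for polymorphic C++ objects.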

Slide 12: Memory management and information handling.

New eformat. Integrate with:
- The virtual event.
- The SharedHeap.
- Event handling in the EF.
- Event modification in the EF, i.e. stripping/collecting.

(Diagram: the virtual event points at the event fragments and the EFResult in the SharedHeap; after stripping, the CalResult stands in for the dropped fragments.)

Slide 13: Open issues.

Many new developments on a very tight schedule.

Memory and performance:
- Management: move from an open/close backpressure mechanism (barrier) to an analog one (nano sleeps).
- Timing: revise the SFI – EFD – SFO protocol.

Coordination:
- Investigate common/similar design issues with monitoring.
- 128-bit header word (first discussion yesterday): 32 bits to register the appropriate output streams. Investigate usage of this header – for streaming only, since the EFResult fragment contains much more detail and is "just around the corner".
- PT answer to the EFD: composite structure.
- EFD – SFO sorting, i.e. how is an output stream defined?
- Load balancing between calibration and physics.

Distribution of calibration constants to the EF software:
- Communication between Athena algorithms and the configuration and calibration databases.
- Could this be done via e.g. the information service?
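The 32 stream-registration bits in the proposed 128-bit header can be sketched as a plain bitmask, one bit per output stream (the bit assignments below are made up for illustration; the actual header layout was still under discussion):

```python
# Sketch of the 32-bit stream-registration field in the proposed 128-bit
# event header: one bit per output stream, so a single event can be
# registered for several streams at once. Bit assignments are illustrative.
STREAM_BITS = {"physics": 0, "express": 1, "LAr": 2, "OverlapMu": 3}

def set_streams(header_word: int, streams) -> int:
    """Register the event for the given output streams."""
    for name in streams:
        header_word |= 1 << STREAM_BITS[name]
    return header_word & 0xFFFFFFFF        # keep it a 32-bit field

def streams_of(header_word: int):
    """Decode the registered streams back from the header word."""
    return [n for n, b in STREAM_BITS.items() if header_word & (1 << b)]

word = set_streams(0, ["physics", "LAr"])
print(f"{word:#010x}", streams_of(word))  # bits 0 and 2 set
```

A bitmask keeps the streaming decision cheap to test downstream (one AND per stream), which fits the "streaming only" role foreseen for the header, with the EFResult carrying the detail.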

Slide 14: Plan of work.

Short time scale:
- Run a calibration algorithm in a PT.
- Implement parallel output streams.

Medium time scale:
- Implement the CalID algorithm and sorting.
- Eliminate dead time in the input and output tasks.

Medium – long time scale:
- Change the memory management.
- Implement the new eformat.
- Implement stripping/collecting.

Long time scale:
- Transfer of intermediate results between the first and second PT.

Slide 15: Conclusions.

- Our understanding of calibration streams in the Event Filter has improved a lot.
- We think we have a realistic overview of the workload involved in modifying the Event Filter.
- Implementation has started, and this will lead to an even better understanding of the topic (and of the work involved).
- There are still some (many) open issues.
- Coordination is important, especially because of the tight schedule.

More information:


Slide 17: Appendix.

Slide 18: A distributed trigger for calibration?

Appears to fit with solutions shown in LVL2Mu presentations:
- Ultralight project (Manuela Cirilli).
- LVL2Mu calibration stream (Speranza Falciano).
- Etc. (Enrico Pasqualucci, Alessandro de Salvo).

"Moore's law for networking" (Gary Stix, Scientific American, January 2001): the Event Filter is CPU dominated – you would like it to be bandwidth dominated…

(Diagram: additional functionality, HLT output. EFD on node n with input via the SFI(?), ExtPTs, and PT #1 acting as an event distributor feeding both a calibration stream and the HLT output.)

Slide 19: ByteStream conversion.

- Write converters? No: a lot of work, and robustness issues.
- Generic ByteStream conversion? No support for complicated classes (e.g. multiple inheritance, polymorphism).
- Something else? Not on the priority list. Under discussion…
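The trade-off can be illustrated with a minimal converter for a flat, fixed-layout record (the field choice is an illustrative assumption): packing works precisely because the class is a plain aggregate with a known layout, which is exactly what breaks down for multiple inheritance or polymorphic objects whose concrete type is only known at run time.

```python
# Minimal sketch of a ByteStream-style converter for a flat record:
# a fixed little-endian layout (run number, event id, energy) can be written
# and read back with no type information in the stream. Polymorphic class
# hierarchies have no such fixed layout, which is why generic conversion was
# ruled out on the slide. The fields here are made up for illustration.
import struct

LAYOUT = "<IIf"  # uint32 run, uint32 event_id, float32 energy

def to_bytestream(run: int, event_id: int, energy: float) -> bytes:
    return struct.pack(LAYOUT, run, event_id, energy)

def from_bytestream(raw: bytes):
    return struct.unpack(LAYOUT, raw)

raw = to_bytestream(2843, 42, 12.5)
print(len(raw), from_bytestream(raw))  # 12 bytes round-trip
```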

Slide 20: Memory management and networking/timing.

(Diagram: physics case. Events arrive from the SFI with 19 milliseconds of transport time, the barrier adds 0.25 ms of dead time, and the PTs – PESA + CalID in PT #1, plus the calibration PT with stripping/collecting – spend 1+ second of processing time. Events, EFResults, CalResults and intermediate results sit in the SharedHeap, accessed through the virtual event, with output to the SFO.)

Slide 21: Memory management and networking/timing.

- Eliminate dead time: redesign the SFI – EFD – SFO communication protocol; coordination with the networking group.
- The barrier is insufficient: oscillations; other memory requests.

(Diagram: special-stream case. Transport time drops to 0.01 ms and processing time to ~0 seconds, so the 25 ms of dead time dominates.)


Slide 23: Memory management and networking/timing.

- Eliminate dead time.
- The barrier is insufficient: oscillations; other memory requests.
- Solution: nano sleeps. However: multiple control loops; risk of oscillations; additional complexity.
- Workload: high – especially testing.
- Impact: ???

(Diagram: the barrier is replaced by nano sleeps on both the input and the output side; 0.01 ms transport time, 25 ms dead time, ~0 seconds processing time.)
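The two backpressure mechanisms can be contrasted in a small sketch (the heap size and delay constants are illustrative values, not the real EFD parameters): a barrier gates the input on/off at a fixed threshold, while nano sleeps throttle the producer continuously in proportion to buffer occupancy.

```python
# Sketch contrasting the two backpressure mechanisms from the slides:
# - barrier: input is simply closed once the shared heap is full (on/off),
#   which can oscillate between bursts of accepts and hard stalls;
# - nano sleeps: the producer is delayed by an amount that grows with
#   occupancy, smoothing the input rate instead of gating it.
# HEAP_SLOTS and the delay constant are made-up illustration values.

HEAP_SLOTS = 8

def barrier_accept(occupancy: int) -> bool:
    """Open/close mechanism: accept only while the heap has a free slot."""
    return occupancy < HEAP_SLOTS

def nano_sleep_delay(occupancy: int, base_ns: int = 100) -> int:
    """Analog mechanism: delay (ns) grows with occupancy, no hard gate."""
    return base_ns * occupancy

print(barrier_accept(7), barrier_accept(8))      # open, then closed
print(nano_sleep_delay(0), nano_sleep_delay(8))  # no delay ... 800 ns
```

The sketch also hints at the risk noted on the slide: the analog scheme introduces a feedback loop (delay depends on occupancy, occupancy depends on delay), and several such loops interacting can themselves oscillate.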