LHCb Readout System and Real-Time Event Management
R. Jacobsson, CERN
16th IEEE NPSS Real Time Conference, Beijing, China, May 10–15, 2009

Presentation transcript:

Slide 1: LHCb Readout System and Real-Time Event Management
- Emphasis on the architectural and functional aspects, not the particular technologies
- Commissioning and status
Related contributions: "An Integrated Control System for the LHCb Experiment", CMS1-1, Thursday; "Handling Online Information in the LHCb Experiment", CMSP-8, Tuesday.
Presented by Federico Alessio, CERN, in place of Richard Jacobsson, CERN, on behalf of the LHCb Online Team.

Slide 2: Place in the World
- Underground cavern at ~100 m depth on the LHC ring
- Collaboration: 50 institutes in 16 countries, ~700 people
- Detector dimensions roughly 10 m by 20 m (figure labels)

Slide 3: LHCb, a Single-arm Forward Spectrometer
- Challenge of a hadronic precision experiment
  - High particle multiplicity
  - Very large background
  - Small ratio of interesting B-meson decays, O(10^-3 – 10^-9)
- High statistics
  - Design luminosity L ~2x10^32 cm^-2 s^-1 (1/50 of ATLAS & CMS)
  - 10 MHz visible interactions
  - 100 kHz bb-event rate, a very large number of bb pairs per year at LHCb
  - 2 kHz event storage rate
- Detector requirements
  - Efficient trigger for many B decay topologies: Muon system, ECAL + Preshower, HCAL, Vertex Locator
  - Efficient particle identification: RICH
  - Good decay time resolution: Vertex Locator, 5 mm from the beam
  - Good mass resolution: Tracker and Magnet

Slide 4: Trigger Architecture
- Level-0 (L0) hardware trigger: 40 MHz -> 1 MHz
  - Searches for high-pT muon, electron, photon and hadron candidates (L0 mu, L0 e/gamma, L0 had)
  - Latency 4 us, i.e. pipelining of 160 events
- High Level Trigger (HLT): farm with O(1000) quad-core nodes
  - HLT1: confirms the L0 candidates with more complete information (ECAL, hadron and muon "alleys") and adds impact-parameter and lifetime cuts; ~30 kHz output
  - HLT2: global event reconstruction plus inclusive selections (mu, mu+track, mumu) and exclusive selections
  - Processing time available: O(milliseconds)
  - Output rate to storage: 2 kHz, event size ~35 kB
- The HLT needs to know how L0 is configured
  - How to distribute this to 1000 nodes simultaneously, in seconds, when optimizing parameters during an LHC fill?
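As a quick sanity check (simple arithmetic on the numbers quoted above, not additional material from the slides), the pipeline depth and the rejection factors follow directly from the rates and the fixed L0 latency:

```python
# Rough arithmetic on the trigger figures quoted above (illustrative only).
bunch_clock_hz = 40e6        # LHC bunch-crossing clock seen by the front-ends
l0_output_hz   = 1e6         # Level-0 accept rate
hlt_output_hz  = 2e3         # HLT rate to storage
l0_latency_s   = 4e-6        # fixed Level-0 latency

# Number of events that must be buffered in the front-end pipelines
pipeline_depth = bunch_clock_hz * l0_latency_s          # = 160 events

# Rejection factors achieved at each stage
l0_rejection  = bunch_clock_hz / l0_output_hz           # = 40
hlt_rejection = l0_output_hz / hlt_output_hz            # = 500

print(pipeline_depth, l0_rejection, hlt_rejection)      # 160.0 40.0 500.0
```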

Slide 5: Readout Architecture as Presented at RT2003
- Evolution since then: eliminating trigger levels

Slide 6: LHCb Readout System (RT2009)
Diagram: front-end electronics of the sub-detectors (VELO, ST, OT, RICH, ECal, HCal, Muon) feed the Readout Boards; a readout network connects them to the High-Level Trigger farm and a monitoring farm; the Timing & Fast Control system distributes the LHC clock and the L0 trigger, and event building is driven by event requests from the farm.
- ~5000 optical/analog links from the front-end electronics, O(4 Tb/s), crossing the shielding wall
- 320 Readout Boards (ROBs); 24 Gb/s and 4 Gb/s quoted on the L0 trigger path
- Readout network: ~3000 GbE ports, 35 GB/s aggregate
- HLT farm: 50 subfarms of ~40 nodes each
- Storage: 50 TB, written at 70 MB/s, then shipped offline
See "Controlling a Large Trigger Farm Using Industrial Tools", OPF2, Wednesday, and "Management of the LHCb Readout Network", OPF-3, Wednesday 11.00.
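The bandwidth figures on this slide are mutually consistent; a minimal back-of-the-envelope check using only the event size and rates quoted in the talk:

```python
# Cross-check of the readout and storage bandwidth figures (illustrative).
event_size_bytes = 35e3      # ~35 kB per event
l0_rate_hz       = 1e6       # events entering the readout network
hlt_rate_hz      = 2e3       # events written to storage

readout_bw = event_size_bytes * l0_rate_hz   # ~35 GB/s through the readout network
storage_bw = event_size_bytes * hlt_rate_hz  # ~70 MB/s to storage

print(f"readout network: {readout_bw/1e9:.0f} GB/s, storage: {storage_bw/1e6:.0f} MB/s")
```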

Slide 7: Data Transfer Protocols
- Design principle: a limited number of protocols and technologies
  - Readout network: GbE, IPv4
  - Farm-to-storage: GbE, TCP/IP
- Overhead reduction
  - Event fragments are small, ~120 bytes, while the IP/Ethernet overhead is 58 bytes per packet
  - Therefore fragments of several consecutive events are packed into one MultiEvent Packet (MEP)
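A minimal sketch of the packing idea (the constants come from the slide; the function and the packing factors are illustrative, not the actual MEP wire format): several consecutive fragments share one IP/Ethernet header, so the fixed 58-byte overhead is amortized.

```python
# Illustrative sketch of MultiEvent Packet (MEP) packing -- not the real wire format.
ETH_IP_OVERHEAD = 58      # bytes of fixed per-packet overhead quoted on the slide
FRAGMENT_SIZE   = 120     # typical event-fragment size quoted on the slide

def overhead_fraction(packing_factor: int) -> float:
    """Fraction of the packet taken by protocol overhead for a given packing factor."""
    payload = packing_factor * FRAGMENT_SIZE
    return ETH_IP_OVERHEAD / (ETH_IP_OVERHEAD + payload)

for n in (1, 8, 16, 32):
    print(f"packing factor {n:2d}: overhead {overhead_fraction(n):.1%}")
# 1 fragment/packet   -> ~33% overhead
# 32 fragments/packet -> ~1.5% overhead
```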

Slide 8: Centralized Readout Control
- Readout control has two aspects:
  - Control of data transfer
    - MEP packing
    - Destination assignment for event building and HLT
    - Load balancing
    - Partitioning for parallel activities
  - Management of event types and associated destinations/processing
    - Physics triggers
    - Calibration triggers
    - Luminosity triggers
    - Non-zero-suppressed data
    - Luminosity scans (Vernier scans)
- Driven and managed by the LHCb Timing and Fast Control (TFC) system
  - Responsible for distributing timing, trigger, and synchronous and asynchronous information to the entire readout system
  - FPGA-based master: the Readout Supervisor
    - Also performs rate control and generates all types of auto-triggers and calibration sequences

Slide 9: TFC Master Information Flow
Diagram: the Readout Supervisor sits at the centre of the information flow. It receives the clock/orbit, UTC and LHC information from the LHC accelerator, bunch currents from the Beam Phase and Intensity Monitor, the L0 Decision from the L0 trigger, hardware and run parameters, run statistics and detector status from the sub-detectors, the trigger throttle from the readout electronics, and Multi-Event Requests from the HLT farm. It distributes TFC information to the front-end and readout electronics and appends an RS event bank to the data.

Slide 10: Data Transfer Control
- Readout network: push protocol with a passive pull mechanism
- MEP packing controlled by the Readout Supervisor:
  1. The Trigger Type is transmitted synchronously to all Readout Boards
     - Determines the processing in the ROB and the synchronization check
  2. The IP Destination is transmitted synchronously to all Readout Boards for each MEP
     - Its reception triggers closure and dispatch of the MEP being assembled
  - Interleaving Trigger Type and Destination determines the MEP packing
    - Dynamic packing factor depending on event types
    - Trigger type determines the destination type (HLT, calibration, etc.)
- Farm destination for the next MEP chosen with a credit scheme (see the sketch after this slide):
  1. Farm nodes transmit Event Requests carrying a credit ("declare as ready to receive")
  2. The Readout Supervisor goes round-robin through its Destination Table
  3. Destinations with positive credit are selected and their credit decremented
- Effectively, load balancing of the readout network and the HLT farm
  - Static load balancing of the network by organization of the Destination Tables in the Readout Supervisor
  - Dynamic load balancing of the HLT farm nodes: nodes request events when ready
  - In all cases, event loss is minimized in case of failing, blocked or slow links or nodes
  - Ultimately, the credit scheme regulates the readout rate when credits run low
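Below is a minimal sketch of the credit-based destination choice, under simplifying assumptions (one credit per Event Request, a static round-robin Destination Table, hypothetical node names); the actual Readout Supervisor implements this logic in FPGA firmware.

```python
# Illustrative model of the credit-based round-robin destination assignment.
# Assumptions: one credit per Event Request, static Destination Table; this is
# not the actual Readout Supervisor firmware logic.
from collections import defaultdict

class DestinationAssigner:
    def __init__(self, destination_table):
        self.table = destination_table          # static, ordered list of farm nodes
        self.credits = defaultdict(int)         # credits granted by Event Requests
        self.index = 0                          # round-robin pointer

    def event_request(self, node, credits=1):
        """A farm node declares itself ready to receive 'credits' more MEPs."""
        self.credits[node] += credits

    def next_destination(self):
        """Pick the next node with positive credit, round-robin; None if starved."""
        for _ in range(len(self.table)):
            node = self.table[self.index]
            self.index = (self.index + 1) % len(self.table)
            if self.credits[node] > 0:
                self.credits[node] -= 1
                return node
        return None   # no credits left: the readout rate must be throttled

# Usage: nodes ask for events, the supervisor spreads MEPs across them.
assigner = DestinationAssigner(["node01", "node02", "node03"])
assigner.event_request("node01", 2)
assigner.event_request("node03", 1)
print([assigner.next_destination() for _ in range(4)])
# ['node01', 'node03', 'node01', None]
```

Dynamic load balancing falls out naturally: a slow or blocked node simply stops sending Event Requests and therefore stops receiving MEPs.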

Slide 11: Event Management
- A Readout Supervisor data bank is appended to each event (an illustrative layout is sketched below)
- Book-keeping
  - Run Number regroups events taken with the same configuration
  - UTC time to correlate the event with the Conditions DB
- Coarse quality bits from each sub-detector, which may be used by the HLT
- Trigger Type and Calibration Type determine the type of processing
- Window of consecutive 25 ns clock cycles which should be processed together (see later)
- The HLT needs to know how L0 is configured
  - How to distribute this to 1000 nodes simultaneously?
  - The Trigger Configuration Key distributed in the RS data bank allows optimizing the trigger parameters in real time during an LHC fill
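An illustrative layout of the information carried by such a bank; the field names and types here are assumptions made for the sketch, not the actual LHCb raw-bank format.

```python
# Illustrative sketch of the per-event Readout Supervisor bank contents.
# Field names and types are assumptions; the real LHCb bank format differs.
from dataclasses import dataclass

@dataclass
class ReadoutSupervisorBank:
    run_number: int            # groups events taken with the same configuration
    utc_time_ns: int           # correlates the event with the Conditions DB
    trigger_type: int          # determines the type of processing in the HLT
    calibration_type: int      # which calibration sequence, if any
    tck: int                   # Trigger Configuration Key: how L0 was configured
    detector_status: int       # coarse per-sub-detector quality bits
    window_size: int           # consecutive 25 ns crossings read out together

def hlt_configure(bank: ReadoutSupervisorBank) -> str:
    """Each HLT node reads the TCK from the event itself, so a new trigger
    configuration reaches all ~1000 nodes without a separate distribution step."""
    return f"configure L0-dependent selections from TCK 0x{bank.tck:08x}"
```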

Slide 12: Online Monitoring
- Detector performance, readout performance and data quality
  - Histograms collected from all systems
  - Monitoring farm spying on the event streams at best effort; also produces histograms from an online reconstruction at best effort
- Histogram analysis
  - Automatic checks and alarms
  - Histograms inspected by the Data Manager shifter
Diagram: histogram handling (ECS), automatic histogram analysis, interactive presenter.

Slide 13: LHCb Data and Process Flow
Diagram of the online-to-offline flow:
- Online: HLT output at 70 MB/s, i.e. one 2 GB file (~60 kevts) every 30 s
- Two output streams: a bulk stream and an express stream (5 Hz)
- Offline (Tier-0/1, Tier-2): storage (CASTOR), reconstruction, stripping, analysis, simulation
- Express stream feeds reconstruction, calibration, alignment and quality checking in the offline control room
- Supporting services: bookkeeping, data and production management, data-quality checking, test jobs, run info
- Additional figures on the diagram: 20 h/file, 20 kB/evt

Slide 14: No-beam System Tests
- Real-time scheme to validate the High Level Trigger, the data flow and the offline processing
  - Goal: be ready to receive, process and analyze 7 million events in the first hour of collisions
- The detector is replaced by an injector of 10^8 "accepted" simulated events, fed in real time into the Online system at the HLT rate (2 kHz), driven by MEP Requests
See poster "High-speed Data Injection for Data Flow Verification in LHCb", CMSP-28, Tuesday 16.40.
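A quick consistency check on the quoted figures (simple arithmetic, not from the slides themselves):

```python
# Simple arithmetic on the injection-test figures quoted above (illustrative).
hlt_rate_hz     = 2e3       # injection rate into the Online system
first_hour_s    = 3600
injected_events = 1e8       # total simulated events injected

events_first_hour = hlt_rate_hz * first_hour_s            # ~7.2 million events
injection_time_h  = injected_events / hlt_rate_hz / 3600  # ~14 h to inject 1e8 events

print(f"{events_first_hour:.2e} events in the first hour, "
      f"{injection_time_h:.1f} h to inject all simulated events")
```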

Slide 15: Commissioning LHCb
Two years of intense work, 2006–2008, with the aim to:
- Operate the detector AND the people as a unit, with common tools
- Bring all components (sub-detectors and service systems) to an operational state
- Define, implement and validate the tools and procedures needed to run the detector as a whole
- Organise the activities to reach the ready state in time
- Understand and calibrate the detector
  - Test pulses, radioactive sources
  - Cosmics
  - LHC injection tests
  - First days with beam
- Operate with two shifters
  - Operating the whole detector from one console
  - Understandable high-level tools for diagnostics, alarms and data monitoring
  - Homogeneity in the system
  - Shifter training
  - On-call experts for all sub-systems and sub-detectors
- Reach operational efficiency
  - Starting (<10 min) and restarting (<1 min) rapidly and smoothly
Crucial tool: readout and processing of sets of consecutive 25 ns clock cycles around a "detector activity" trigger
- Time and space alignment
- Leakage into preceding and subsequent clock cycles
- Optimize signal over spill-over

Slide 16: Commissioning with Cosmics
- Challenge: the LHCb geometry is NOT well suited for cosmics…
  - Rate of "horizontal" cosmics well below 1 Hz
- Still, 1.6x10^6 good events recorded for the large sub-detectors (July–September 2008)

Slide 17: First Glimpse of LHC Protons
- Sector tests, Aug–Sep 2008: Beam 2 dumped on the injection-line beam stopper (TED) in transfer line TI8
Figure: particles from the TED shots seen in the Vertex Locator, the Scintillator Pad Detector and the Muon system.

Slide 18: Experience with 150 Joules
- LHC turn-on, September 10, 2008: an all too short honeymoon with the LHC…
- RICH2 "photon blast" from the beam-splash events
- Contrary to what we wish for the future, the splashes were Highly Desired Events!

Slide 19: Readout Architecture for the Upgrade
- Upgrade plans already well underway: another level of natural selection
See talk "A 40 MHz Trigger-free Readout Architecture for LHCb", RTSA2, Tuesday.

Slide 20: Conclusions
- LHCb has become an operational experiment "waiting" for beam
  - Readout system is mature, with advanced readout control and event management
  - A good compromise reached between the use of COTS and custom electronics
- Commissioning
  - Still (too…) many experts in the control room… necessary ones or not…
  - Injection tests at the end of August 2008 gave the first ever LHC-induced tracks
  - Beam-collimator shots were obviously the highlight of 2008, "unfortunately"…
- LHCb is very ready for the long run with LHC COLLISIONS
- The upgrade path is towards a full 40 MHz trigger-free readout
  - Becoming a popular concept, compare future accelerators
  - A good topic for next year's Real Time Conference