LHCb front-end electronics and its interface to the DAQ
J. Christiansen, CERN (LHCb DAQ review, September 2001)


Slide 2: Quick LHCb front-end overview
- ~1 million detector channels; 10 different sub-detector front-end implementations.
- 40 MHz bunch-crossing rate, of which ~1/3 of crossings contain an interaction.
- Two trigger levels in the front-end:
  – L0: 4.0 µs constant latency (pipeline buffer); max 1.11 MHz accept rate.
  – L1: variable latency, max 1900 events (event FIFO); trigger decisions distributed in chronological order; 40 (100) kHz accept rate.
- Front-end architecture:
  – Simple front-end architecture where possible.
  – Central prevention of buffer overflows.
  – Architecture extensively simulated in VHDL to ensure correct function under all conditions.
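These rates fix the sizes of the buffers that follow. As a quick sanity check (plain arithmetic on the slide's numbers; the derived quantities are not from the source):

```python
# Back-of-the-envelope check of the rates quoted above.
BX_RATE_HZ   = 40e6     # bunch-crossing rate
L0_ACCEPT_HZ = 1.11e6   # maximum L0 accept rate
L1_ACCEPT_HZ = 40e3     # nominal L1 accept rate (100 kHz after upgrade)
L0_LATENCY_S = 4.0e-6   # constant L0 latency

# The L0 pipeline must hold every crossing for the full latency:
pipeline_depth = round(BX_RATE_HZ * L0_LATENCY_S)
print(f"L0 pipeline depth : {pipeline_depth} crossings")          # 160
print(f"L0 rejection      : {BX_RATE_HZ / L0_ACCEPT_HZ:.0f}x")    # ~36x
print(f"L1 rejection      : {L0_ACCEPT_HZ / L1_ACCEPT_HZ:.0f}x")  # ~28x
```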

Slide 3: General architecture
[Block diagram only: the front-end system (L0 buffer and L0 derandomizer, zero suppression & multiplexing, L1 buffer, L1 derandomizer, output buffer) alongside the Trigger & TFC system (L0 and L1 triggers, Readout Supervisor, TTC system) and the DAQ, with L0 and L1 throttle paths back from the front-end.]

Slide 4: L0 front-end
[Block diagram only: raw data at 40 MHz enters the 4 µs L0 pipeline buffer and the 16-event L0 derandomizer; the L0 trigger is gated by the L0 derandomizer emulator in the Readout Supervisor, and the derandomizer output feeds the L1 buffer.]
- Constant latency: 4.0 µs.
- Maximum 1.11 MHz trigger rate.
- 16-event-deep L0 derandomizer.
- Events from the L0 derandomizer are defined to be at most 36 words (32 channel words + 4 tag words), read out at 40 MHz.
- Derandomizer overflows are prevented by a central emulator in the Readout Supervisor, based on a set of strictly defined front-end parameters.
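A minimal software sketch of the bookkeeping such an emulator performs, assuming a one-word-per-clock readout model (the real logic is hardware in the Readout Supervisor; the depth and word count are the slide's figures):

```python
DEPTH = 16             # L0 derandomizer depth, in events
WORDS_PER_EVENT = 36   # 32 channel words + 4 tag words, one word per 25 ns

def emulate(l0_accepts):
    """l0_accepts: one bool per bunch crossing. Returns the accepts that
    survive, with any accept that would overflow the buffer suppressed."""
    occupancy, countdown, out = 0, 0, []
    for accept in l0_accepts:
        if countdown > 0:                # one word leaves at 40 MHz
            countdown -= 1
            if countdown == 0:
                occupancy -= 1           # event fully shipped to the L1 buffer
        if accept and occupancy < DEPTH:
            occupancy += 1               # room left: pass the accept through
            out.append(True)
        else:
            out.append(False)            # reject, or accept turned into reject
        if occupancy > 0 and countdown == 0:
            countdown = WORDS_PER_EVENT  # start reading out the next event
    return out
```

With 36 words per event at 40 MHz the derandomizer drains one event every 900 ns, which is exactly the 1.11 MHz maximum sustained rate quoted above.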

Slide 5: L1 front-end (data path)
[Block diagram only: the 1927-event L1 buffer feeds a 15-event L1 derandomizer, followed by zero suppression & multiplexing, a reorganizer and an output buffer towards the DAQ; the Readout Supervisor distributes spaced L1 trigger decisions and commands at 40 MHz, and board- or system-level "nearly full" conditions assert the hardwired L1 throttle (2 µs delay).]

Slide 6: L1 front-end
- Variable latency. L1 trigger decisions are distributed to the front-end in chronological order via TTC broadcast messages (for both accepts and rejects).
- L1 buffers in the front-ends are implemented as simple FIFOs.
- L1 buffer occupancy is monitored centrally by the Readout Supervisor, which throttles L0 triggers when there is a risk of overflow (see the sketch below).
- L1 trigger decisions are sent to the front-end at a rate that can be handled by all front-ends (no local buffering of trigger decisions is needed).
- 15-event-deep L1 derandomizer:
  – 3 events to cover the L1 throttle delay (2 µs);
  – 12 events for derandomization.
- The L1 derandomizer and the following data buffers are protected against overflow by the hardwired L1 throttle signal.
- Zero suppression (sparsification) and event-data formatting.
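Because every L0 accept adds one event to every front-end L1 FIFO and every L1 decision removes one, the central monitoring reduces to simple counting. A sketch, assuming a safety margin the slide does not specify:

```python
L1_BUFFER_DEPTH = 1900       # maximum events in the front-end L1 FIFO

class L1BufferEmulator:
    """Mirrors the front-end L1 FIFO occupancy inside the Readout Supervisor."""
    def __init__(self, margin=16):   # margin is an assumption, not from the slide
        self.occupancy = 0
        self.margin = margin

    def on_l0_accept(self):      # one event enters every front-end L1 buffer
        self.occupancy += 1

    def on_l1_decision(self):    # accepts *and* rejects each free one slot
        self.occupancy -= 1

    def throttle_l0(self):
        return self.occupancy >= L1_BUFFER_DEPTH - self.margin
```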

Slide 7: Centralized front-end control: the Readout Supervisor
- Receives L0 and L1 trigger decisions from the trigger systems.
- Distributes to the front-end only those trigger accepts that will not generate buffer overflows:
  – L0 derandomizer overflows are prevented by the L0 derandomizer emulator;
  – L1 buffer overflows are prevented by the L1 buffer emulator;
  – L1 trigger decisions are spaced to match the processing speed of the front-end (see the sketch below);
  – buffer overflows in the L1 derandomizer and the following buffers are prevented by the hardwired L1 throttle network.
- Resets, calibration signals, testing and debugging functions.
- High level of programmability to allow system-level optimizations.
[Block diagram only: L0 and L1 trigger inputs, the L0 derandomizer and L1 buffer emulators, the L1 trigger derandomizer and decision spacer, the TTC encoder and distribution, and the L0/L1 throttle inputs.]
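A sketch of the decision spacer: decisions queue in the Readout Supervisor and leave no faster than one per `min_gap` clock cycles, preserving chronological order. The gap value is an assumption; the slide only says decisions are spaced to match front-end processing speed.

```python
from collections import deque

class DecisionSpacer:
    def __init__(self, min_gap):
        self.min_gap = min_gap
        self.pending = deque()   # decisions waiting, oldest first
        self.cooldown = 0

    def push(self, decision):
        self.pending.append(decision)   # chronological order preserved

    def tick(self):
        """Call once per clock cycle; returns a decision to broadcast, or None."""
        if self.cooldown > 0:
            self.cooldown -= 1
            return None
        if self.pending:
            self.cooldown = self.min_gap - 1
            return self.pending.popleft()
        return None
```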

Slide 8: Front-end control and monitoring
- Clock-synchronous control of the front-end is handled by the Readout Supervisor via the TTC system.
- Local monitoring in the front-ends of buffer overflows and of event consistency, based on event tags (Bunch ID, L0 event ID, L1 event ID).
- Error conditions set error flags in the event fragments and status bits towards the Experiment Control System (ECS).
- Front-end parameters are downloaded via the ECS (with enforced read-back capability).
- Standardized ECS interfaces for the front-end:
  – credit-card PC;
  – SPECS (a simple serial protocol);
  – CAN ELMB (from ATLAS).
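The consistency check itself is a straightforward tag comparison; a sketch with illustrative field names (the real front-end register layout is not given here):

```python
def check_event_tags(fragment, expected):
    """fragment / expected: dicts with 'bunch_id', 'l0_event_id', 'l1_event_id'.
    Returns the list of mismatched tags; a non-empty result means the front-end
    sets the error flag in the fragment and a status bit towards the ECS."""
    return [tag for tag in ("bunch_id", "l0_event_id", "l1_event_id")
            if fragment[tag] != expected[tag]]
```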

Slide 9: Detailed front-end architecture
[Block diagram only; no accompanying text on the slide.]

Slide 10: Interface to DAQ
- The interface between the front-end and the DAQ system is handled by Readout Units.
- Standardized event formatting on the optical links from the front-ends.
- The data bandwidth per front-end branch is limited to ~25 MB/s under nominal conditions, to keep headroom for unexpectedly high channel occupancies and to allow an upgrade from 40 to 100 kHz trigger rate.
[Block diagram only: L1 front-end electronics (FE) feeding optional front-end multiplexers (FEMs), then one Readout Unit (RU) per branch for sub-detectors N, N+1, ..., the event-building network, and the L2/L3 CPU farm. Event formatting: each fragment 0..N carries an event-data header, the event data and an event-data trailer; the fragments sit inside an event-building header/trailer, which in turn sits inside a transport header/trailer.]
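A sketch of that nested framing; the 3-byte markers are placeholders, not the real LHCb header formats:

```python
def wrap(header: bytes, payload: bytes, trailer: bytes) -> bytes:
    return header + payload + trailer

def build_event(fragments):
    """fragments: list of raw event-data payloads, one per front-end source."""
    body = b"".join(wrap(b"EDH", f, b"EDT") for f in fragments)  # event-data framing
    eb   = wrap(b"EBH", body, b"EBT")    # event-building header/trailer
    return wrap(b"TRH", eb, b"TRT")      # transport header/trailer
```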

Slide 11: Data link from front-end to DAQ
- Standardized unidirectional optical link handling distances of up to 100 m; no Xon/Xoff backpressure is foreseen.
- In some sub-detectors the link transmitters are located in the cavern, with limited levels of radiation (a few krad).
- Required bandwidth: 10-50 MB/s.
- Use of S-link enables:
  – flexibility in the choice of link technology;
  – use of standardized link interface cards.
- Standardization on Gigabit Ethernet:
  – de facto standard in the computer industry;
  – event building in the DAQ will be based on Gigabit Ethernet;
  – many relatively cheap components are available;
  – a Gigabit Ethernet S-link transmitter is under development at Argonne.
- Question of framing overhead: event data is not heavily concentrated in LHCb, so fragments are small; reduced Ethernet framing can be used on the data to the Readout Units.
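Why framing overhead matters here: standard Ethernet per-frame constants versus an illustrative (assumed) range of small fragment sizes.

```python
preamble, header, fcs, gap = 8, 14, 4, 12   # bytes on the wire per Ethernet frame
overhead = preamble + header + fcs + gap    # = 38 bytes
for fragment in (64, 100, 500):             # assumed fragment sizes in bytes
    frac = overhead / (fragment + overhead)
    print(f"{fragment:4d}-byte fragment: {frac:.0%} of the wire is framing")
```

For fragments of order 100 bytes, roughly a quarter of the link is spent on framing, which is the motivation for the reduced framing towards the Readout Units.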

Slide 12: LHCb front-end in numbers

Detector        Channels   FE links   FEMs   Link load   RUs   Event size
Vertex          205k       –          –      – MB/s      7     6k
RICH            450k       550        –      10 MB/s     15    15k
Inner tracker   220k       –          –      – MB/s      14    11k
Outer tracker   120k       –          –      – MB/s      30    39k
Muon            26k        100        –      8 MB/s      3     2k
Calorimeter     20k        –          –      – MB/s      10    12k
DIV             –          –          –      –           5     1k
Total           1041k      –          –      –           –     –