Testing, timing alignment, calibration and monitoring features in the LHCb front-end electronics and DAQ interface Jorgen Christiansen - CERN LHCb electronics.

Testing, timing alignment, calibration and monitoring features in the LHCb front-end electronics and DAQ interface
Jorgen Christiansen - CERN, LHCb electronics coordination
With help from all my LHCb colleagues

LECC2006, J. Christiansen
Slide 2: Overview

LHCb detector and its front-end electronics and DAQ
TFC (TTC) architecture and consequences
General features related to test, time alignment and monitoring:
– Time alignment
– Calibration pulse injection
– Consecutive triggers
– Event and bunch ID event tagging
– Optical links and BER
– Pattern and spy memories
– ECS access to local data buffers
DAQ interface
Data monitoring
ECS
Sub-detector specifics
Summary and lessons learnt

Slide 3: LHCb experiment and its detectors

Forward-only detector to one side:
– Displaced interaction point
– Looks like a fixed-target experiment
– Gives good access to detectors/electronics
Vertex and pileup detectors in secondary machine vacuum
12 different detectors: Vertex, Pileup, Trigger/Silicon tracker, Outer tracker, RICH, Ecal, Hcal, Preshower, Scintillating pad, Muon wire chambers, Muon GEM
Some sub-detectors use the same front-ends:
– Vertex, Pileup, trigger tracker and silicon tracker: Beetle
– Ecal & Hcal
– Muon wire chamber and GEM
7 different front-end implementations, 3 L0 trigger sub-systems
~1 million channels
Trigger rate: 1 MHz

Slide 4: Front-end architecture

L0 front-end:
– On-detector (radiation!)
– Detector specific, based on ASICs
– 4 µs latency, 1 MHz trigger rate
– 16-deep derandomizer
L1 front-end = DAQ interface:
– In counting house (no radiation)
– Common implementation (one exception)
– Event reception and verification, zero-suppression, data formatting, data buffering and trigger throttle, DAQ network interface
L1 trigger buffering was abandoned last year; events are now read out to the DAQ at 1 MHz (cost and availability of the GBE switch).
Standardized interfaces:
– TTC (Timing and Trigger Control)
– L0 to L1 front-end and L0 trigger: GOL-based optical links (with one exception)
– ECS control interfaces: CAN, SPECS, credit-card PC
– DAQ interface (Gigabit Ethernet)
Front-end architecture and related requirements documented in:
L0-FE: n/others/LHB/public/lhcb pdf
L1-FE:
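The 16-deep L0 derandomizer above can be exercised with a toy simulation: random accepts at the 1 MHz average rate out of 40 MHz crossings, one event drained at a fixed readout time. The 35-crossing readout time is an illustrative assumption, not an LHCb specification; the point is only to show how occupancy and would-be overflows (throttle requests) behave.

```python
import random

def simulate_derandomizer(n_crossings=100_000, depth=16,
                          p_trigger=1 / 40, readout_crossings=35, seed=1):
    """Count peak occupancy and would-be overflows of the L0 derandomizer."""
    random.seed(seed)
    occupancy = reading = overflows = peak = 0
    for _ in range(n_crossings):          # one iteration per 25 ns crossing
        if random.random() < p_trigger:   # L0 accept, 1 MHz on average
            if occupancy < depth:
                occupancy += 1
            else:
                overflows += 1            # would assert the trigger throttle
        if occupancy and reading == 0:
            reading = readout_crossings   # start draining the oldest event
        if reading:
            reading -= 1
            if reading == 0:
                occupancy -= 1
        peak = max(peak, occupancy)
    return peak, overflows

peak, overflows = simulate_derandomizer()
```

Shortening `readout_crossings` or deepening the buffer quickly suppresses the overflow rate, which is the trade-off the 16-deep choice reflects.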

Slide 5: DAQ

~300 DAQ interface modules (TELL1) with 4 Gigabit Ethernet ports each:
– In most cases only 2 GBE ports are needed per module
~2000 CPUs in ~50 sub-farms
Large 1260-port Ethernet switch
Originally two separate data streams for L1 trigger and HLT; now full readout at 1 MHz: 56 kbyte x 1 MHz = 56 Gbytes/s
Merging of multiple (8-16) events into Multi Event Packets (MEP) to reduce Ethernet transport framing overhead
No re-transmission of lost packets:
– The switch must have low packet loss even at high link load (80%) with the particular traffic pattern we have (all event fragments from all sources for the same MEP must go to the same CPU).
– We do not mind losing a few events now and then, as long as it stays below the % level and handling of incomplete events does not generate problems.
Both static and dynamic CPU load balancing possible (readout supervisor):
– The destination of each Multi Event Packet is transmitted from the readout supervisor to the DAQ interfaces via TTC
The switch can be duplicated if more bandwidth is needed
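The benefit of merging 8-16 events per MEP can be sketched with simple arithmetic: per-packet Ethernet framing cost is amortized over more event fragments. The framing and fragment sizes below are rough assumptions for illustration (56 kB events spread over ~300 sources), not LHCb numbers.

```python
ETH_FRAMING = 38              # bytes: preamble + header + FCS + inter-frame gap
AVG_FRAGMENT = 56_000 // 300  # ~186 bytes per TELL1 source for a 56 kB event

def framing_fraction(events_per_mep: int) -> float:
    """Fraction of the link consumed by framing rather than event data."""
    payload = events_per_mep * AVG_FRAGMENT
    return ETH_FRAMING / (ETH_FRAMING + payload)

for n in (1, 8, 16):
    print(n, f"{framing_fraction(n):.3f}")
```

With one small fragment per packet the framing eats a double-digit percentage of the link; packing 8-16 fragments pushes it to the percent level, which is why the MEP scheme pays off.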

Slide 6: TFC architecture and consequences

The Timing and Fast Control (TFC) system architecture has implications for the LHCb front-ends.
Only one global controller is active when running the whole experiment:
– All sub-detectors must interpret TTC commands in the same manner (triggers, trigger types, calibration, commands, event destinations, delays, etc.)
– No sub-detector-specific commands
LHCb readout supervisor

Slide 7: Test and monitoring features in the front-end

Common features:
– Data flow: B-ID and Event ID, machine and event separators
– Time alignment
– Calibration pulse injection
– Consecutive triggers
– Link test
– Optional ECS access to the L0 pipeline
– ECS access to the derandomizer
– Test pattern memories (L0 trigger)
Monitoring features:
– Error monitoring
– Power monitoring
– Temperature monitoring
[Diagram: standard front-end architecture with test and calibration features — TTCrx, test pattern memory, calibration pulse injection, L0 pipeline and derandomizer with ECS read/write, GOL links to TELL1/RICH-L1, ECS via CAN/SPECS, TFC]

Slide 8: Timing alignment

"Classical" use of the TTCrx programmable delay features:
– Plus sub-detector-specific programmable delays
– Global alignment by the readout supervisor (remember, only one)
– Pipeline buffer lengths can be adjusted in all sub-detectors if needed
First iterations with test pulse injection:
– As test pulse injection itself has (at least partly) unknown delays, this will not give the final time alignment; but a coarse alignment can be made, and the software for global time alignment can be tried out over different delay configurations of the test pulse injection.
No use of cosmic muons in LHCb:
– Underground and horizontal
Final time alignment only with real beam:
– Classical use of correlations between channels and sub-detectors; software for this still to be finalized and tried out at system level
– Specific trigger for isolated interactions: start with a simple interaction trigger from the hadron calorimeter, which can be internally time aligned based on an LED light pulse injection system with known delays
– Use of consecutive triggers for a first coarse alignment
– Large statistics can be collected in the LHCb DAQ with the 1 MHz trigger rate
– Muon detector parts with large out-of-time background hits have specific time-histogramming features in the front-end to make time histograms at the full 40 MHz rate
– We obviously hope that final time alignment will not take too much of our first precious beam collisions.
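The coarse alignment step can be pictured as a scan over the TTCrx fine-delay settings, keeping the one that maximizes the in-time hit fraction. The response function below is a stand-in for real measurements (a real scan programs the delay, injects pulses and reads the front-end), and the 9 ns offset and 6 ns width are arbitrary illustration values.

```python
import math

def in_time_fraction(delay_ns, true_offset_ns=9.0, width_ns=6.0):
    # Stand-in for the measured fraction of pulses landing in the 25 ns window
    return math.exp(-((delay_ns - true_offset_ns) / width_ns) ** 2)

def coarse_alignment(step_ns=25 / 16):  # TTCrx fine-delay step, ~1.56 ns
    """Try all 16 fine-delay settings; keep the one with the most in-time hits."""
    settings = [i * step_ns for i in range(16)]
    return max(settings, key=in_time_fraction)

best_delay = coarse_alignment()
```

The residual error of such a scan is at most one fine-delay step, which is exactly why it is only a coarse alignment and the final adjustment needs beam correlations.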

Slide 9: Calibration pulse injection

Calibration pulse injection may be very different between sub-detectors:
– In the detector: LED into PM (Cal)
– In the analog front-end: charge injection via a capacitor (Beetle and RICH)
– In the digital part: injection of a given digital pattern (L0 trigger)
– Variable amplitude / fixed amplitude
– Enable per channel or per group
Do not inject in all channels at the same time, as this gives problems for front-ends and DAQ.
– Some specific artifacts: the falling edge of the calibration pulse also gives charge injection; inverted charge between neighboring channels.
Some sub-detectors want to use this during active running for monitoring and calibration purposes, others do not:
– Round robin between channels to cover all channels
– Some want to read out raw non-zero-suppressed data for such events, others want normal zero-suppressed data
– Phase shifting (TTCrx, Delay25 or other specific delay features)
When running local tests, sub-detectors can use these features freely; when running with other detectors, everybody must comply strictly with common rules. Fortunately we defined a common calibration pulse injection scheme from the start that all sub-detectors have managed to comply with.
TTC broadcast calibration pulse injection message with four types:
– 1 type general purpose with predefined timing
– 3 types reserved (only one of these requested, for a specific local test)
– Some timing constraints because a broadcast message is used
During normal running:
– During the large bunch gap, so there are no real hits in addition to the calibration pulse
– Not all detectors want this: local disable of pulse injection
– Not all channels in one go: local round-robin scheme from one pulse to the next
– Possible non-zero-suppressed readout: a specific trigger type allows the DAQ interface to skip zero-suppression (sub-detector dependent)
Not yet verified with large multi-system tests.
[Diagram: calibration pulse timing — the TTC broadcast precedes the L0 trigger by the trigger latency; delay adjust in the TTCrx of up to 16 clock cycles; no other calibration pulse broadcast can be made in between]
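The local round-robin scheme described above can be sketched as a generator that, on each calibration broadcast, selects the next channel group to pulse so that all channels are eventually covered without ever pulsing everything at once. The channel count and group size are illustrative assumptions.

```python
def round_robin_groups(n_channels: int, group_size: int):
    """Yield, per calibration trigger, the channel group to pulse next."""
    n_groups = -(-n_channels // group_size)  # ceiling division
    g = 0
    while True:
        start = g * group_size
        yield list(range(start, min(start + group_size, n_channels)))
        g = (g + 1) % n_groups

gen = round_robin_groups(128, 32)
groups = [next(gen) for _ in range(5)]  # cycles back to group 0 on trigger 5
```

Because the sequence is purely local and deterministic, each front-end can advance its own pointer on every broadcast without any extra TTC commands.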

Slide 10: Consecutive triggers

Gaps between L0 triggers would imply a 2.5% physics loss per gap at a 1 MHz trigger rate:
– This we could obviously survive, but such things can start complicated and "religious" discussions.
Consecutive triggers are very useful for testing, verification, calibration and timing alignment of detectors and their electronics:
– This is really the major reason we decided for this (my private point of view).
Problematic for detectors that need multiple samples per trigger, detectors with drift time (OT: 75 ns) or signal width longer than 25 ns (E/Hcal):
– All sub-detectors agreed that this can be handled.
– Recently realized that one sub-detector cannot handle consecutive triggers because of a bug in a front-end chip (it needs a gap of one).
We still plan to use consecutive triggers for verification and time alignment (excluding the sub-detector with the specific problem):
– Pulse width and spill-over
– Baseline shifts
– Time alignment
Max 16 consecutive triggers (given by the L0 derandomizer size)
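The 2.5% figure follows from simple arithmetic: at a 1 MHz accept rate out of the 40 MHz crossing rate, each crossing vetoed after an accept would itself have carried an accept with probability 1/40.

```python
BUNCH_RATE = 40e6    # LHC bunch-crossing rate, Hz
TRIGGER_RATE = 1e6   # L0 accept rate, Hz

def physics_loss_per_gap(gap_crossings: int) -> float:
    """Fraction of accepts lost if each accept vetoes the next N crossings."""
    return gap_crossings * TRIGGER_RATE / BUNCH_RATE

print(physics_loss_per_gap(1))  # 0.025, i.e. the 2.5% quoted on the slide
```

This also makes clear why the one sub-detector needing a gap of one is tolerable: a single vetoed crossing costs exactly one such 2.5% factor, not more.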

Slide 11: Test and monitoring features in the DAQ interface

Common module based on a common VHDL framework:
– BER link test
– ECS access to input data
– ECS access to the output buffer
– DAQ loopback and packet mirroring
– Extensive monitoring counters: errors, events, words, etc.
– A few different trigger types to allow trigger-type-dependent processing (e.g. calibration, empty bunch for common mode, etc.)
Sub-detector specific:
– Zero-suppression
– Multiplexed analog links from Vertex (plug-in card)
The RICH detector has chosen to implement its own module with similar features.

Slide 12: Event and bunch ID tagging, synchronization and event monitoring

L0 trigger:
– L0 trigger data on the links carry a few bits (typically 2 LSBs) of the bunch ID to allow continuous synchronization verification.
– A specific separation word/character is used on the links in the large machine gap for resynchronization.
Readout:
– Events must, at L0 trigger accept, be assigned a bunch ID and event ID, carried to the DAQ interface for continuous synchronization and data verification.
– Bunch ID and event ID widths may vary between sub-detectors: 4-12 bits.
– The Beetle front-end chip uses an 8-bit buffer address (combined pipeline and derandomizer), which also allows continuous synchronization verification, but requires a reference Beetle chip on each DAQ interface module, as the algorithm generating the reference is not easy to reproduce locally in an FPGA.
Event separator between event fragments to assure event synchronization in case of bit/word errors on readout links:
– Only 1 word is required to re-sync a link after a bit/word error
[Diagram: L0 front-end derandomizer and GOL readout link to the TTCrx-equipped DAQ interface (TELL1), with B-ID/L0-ID tagging per event fragment, event-header separators across the machine cycle, and event input monitoring against a reference]
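The continuous check on the L0 trigger links amounts to comparing the few received bunch-ID bits against a locally incremented expectation and flagging any break in the sequence. A minimal sketch, with the 2-bit field from the slide and a resynchronize-on-error policy that is an illustrative choice:

```python
def check_bunch_ids(words, id_bits=2):
    """Flag positions where the truncated bunch ID breaks the expected sequence."""
    mask = (1 << id_bits) - 1
    errors = []
    if not words:
        return errors
    expected = words[0] & mask
    for i, w in enumerate(words):
        got = w & mask
        if got != expected:
            errors.append(i)
            expected = got          # resynchronize on the received value
        expected = (expected + 1) & mask
    return errors

# A clean 40 MHz sequence produces no flags.
clean = [i % 4 for i in range(12)]
assert check_bunch_ids(clean) == []
```

Even with only 2 bits, a single-word slip shows up immediately, which is all that is needed to raise an error flag and wait for the machine-gap resynchronization.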

Slide 13: Optical links

~7000 optical links: L0 trigger and readout links
GOL radiation-hard serializer from CERN-MIC at 1.6 Gbit/s:
– Single-link transmitters with a VCSEL directly modulated by the GOL
– 12-way link transmitters using fiber-ribbon optical modules
12-way fiber-ribbon receivers
Deserializers: TLK2501, Stratix GX FPGA, Xilinx Virtex-4 FPGA
Standardized (and simplified) LHCb qualification and test procedure:
– GOL built-in test pattern generator (counter; was not documented)
– Bit error rate below the required level with an additional 6 dB of optical attenuation (1/2 hour test); also planned to be performed for all installed links
– BER to be measured for 9 dB and 12 dB attenuation
Link errors (bit, word and desync) could be problematic in the final system:
– The quoted BER range would give 1-10 link errors per second!
– One 32-bit idle word is enough to resynchronize a link (if the PLL has not lost lock)
– L0 trigger links are resynchronized in the large LHC bunch gap
– Readout links have one idle word between event fragments
[Diagram: optical link path — transmitter, ribbon cables via an MPO-MPO wall feed-through and an MPO-SC cassette, breakout cable to the 12-way fiber-ribbon receiver; segment lengths 30 m, 60 m, 10 m and 0.5 m]
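The "1-10 link errors per second" scale follows from multiplying the aggregate bit rate by a BER. With ~7000 links at 1.6 Gbit/s the system moves about 1.1e13 bits per second, so BERs around 1e-13 to 1e-12 (illustrative values, since the slide's exponents were lost in transcription) land exactly in that regime:

```python
N_LINKS = 7000
BIT_RATE = 1.6e9   # bits per second per GOL link

def system_errors_per_second(ber: float) -> float:
    """Expected bit errors per second summed over all links."""
    return N_LINKS * BIT_RATE * ber

for ber in (1e-13, 1e-12):
    print(ber, system_errors_per_second(ber))
```

This is why a single idle word sufficing to resynchronize a link matters: at these rates, link-level recovery has to be routine rather than exceptional.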

Slide 14: DAQ interface: TELL1

Standardized interface from the sub-detector front-ends to the DAQ system:
– 350 modules
– Adaptation to sub-detectors via a common VHDL framework
– Dedicated RICH detector module
Input: plug-in cards:
– A: 2 x 12 optical links at 1.6 GHz
– B: 4 x 16 differential analog inputs with 10-bit ADCs (Vertex)
Control: TTC and credit-card PC:
– A local credit-card PC running Linux can perform intelligent local monitoring (in most cases not yet fully defined); put the PC on the board instead of plugging the board into a PC
Output: 4-port Gigabit Ethernet (copper) plug-in card

Slide 15: DAQ network interface

GBE plug-in card:
– 4 Gigabit Ethernet copper ports
– Bidirectional: only the uplink is used during normal running, but the downlink is useful for testing
~1200-port DAQ network switch:
– Each port typically goes via two patch cords and a long-distance connection: several thousand cables and connectors
Standard commercial quad MAC and PHY interface chips:
– Built-in monitoring features, accessible from ECS via the local CC-PC
– Built-in testing features: MAC loopback on individual ports at multiple levels, PHY cable-tester features, etc.
Packet mirroring:
– Sending an encapsulated MEP block from the DAQ to the TELL1 DAQ interface, with information about which port to return the event fragment from and to which destination
– Checks the DAQ network infrastructure
– Checks the GBE plug-in cards
– Checks the connections from the GBE card to/from the TELL1 board
– Can also be used to check event building in the CPU farm
[Diagram: packet-mirroring path — CPU, DAQ network, PHY and MAC on the GBE card, TELL1 FPGA, CC-PC and ECS]

Slide 16: ECS = Experiment Control System

ECS interfaces are vital for reliable running and for access to monitoring, test and debugging features.
All configuration registers MUST have read-back:
– To verify the correct function of the ECS itself
– To verify the interfaces to front-end chips and modules
– To verify whether the configuration has been corrupted by SEU
In radiation areas, ECS interfaces with triple redundancy (with some exceptions).
Allow an alternative readout path for data in local buffers:
– In some cases the L0 pipeline or L0 derandomizer can be read via ECS
– Access to specific spy memories (e.g. L0 triggers)
– Access to DAQ interface input and output buffers
Monitoring while running:
– Error status registers
– Word, event, etc. counters
– Environment monitoring (temperatures, supply voltages, etc.)
Some confusion about what is covered by the Detector Safety System (DSS) (now clarified in an EDMS document).
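The mandatory read-back rule above reduces to a simple pattern: write the full configuration, read every register back, and report mismatches; repeating the read-back pass while running also catches SEU corruption. A minimal sketch with a stand-in register interface (the real access goes over CAN/SPECS/CC-PC):

```python
class EcsRegisters:
    """Stand-in for an ECS register interface (write/read by address)."""
    def __init__(self):
        self._regs = {}
    def write(self, addr, value):
        self._regs[addr] = value
    def read(self, addr):
        return self._regs.get(addr, 0)

def configure_and_verify(ecs, config):
    """Write the whole configuration, then read back every register.

    Returns the addresses that do not read back as written.
    """
    for addr, value in config.items():
        ecs.write(addr, value)
    return [a for a, v in config.items() if ecs.read(a) != v]

ecs = EcsRegisters()
bad = configure_and_verify(ecs, {0x10: 0x3F, 0x11: 0x01})
assert bad == []
```

Keeping the expected values in the control system, rather than trusting the hardware copy, is what makes the periodic SEU check possible at all.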

Slide 17: Data monitoring in the DAQ

Synchronization verification: event and bunch ID
Dedicated triggers for monitoring:
– Random
– Bunch gap
– Calibration pulse injection
– Etc.
– Analyzed on specific sub-farms
Continuous histogramming for data-quality monitoring:
– Channel occupancies
– Tracking
– Trigger efficiencies
– Etc.

Slide 18: Sub-detector specifics

Vertex:
– Pulse injection: inverting pulse injection between channels and between consecutive pulses
– Analog links with limited test capabilities
– Use of the pipeline column number for event verification
Pileup:
– Like the Velo for the analog readout
– For the trigger part, use of "standard" optical links plus pattern generation and spy memories
Silicon tracker:
– Like the Velo but uses "standard" optical links
Outer tracker:
– Pulse injection: two fixed-amplitude test pulses (binary TDC readout)
– L0 derandomizer can be read via ECS
– Specific test patterns can be loaded into the pipeline buffer
RICH:
– Pulse injection: programmable amplitude
– Laser light injection to calibrate magnetic-field distortions in the Hybrid Photon Detector
– Problems with consecutive triggers
– RICH-specific DAQ interface module
Calorimeters:
– LED light injection: variable-amplitude LED light injection to monitor the stability of the PM gain; needs calibration monitoring while running
– Absolute Hcal time alignment for the initial interaction trigger
– Extensive spy buffers and pattern memories on the L0 trigger and readout paths
Muon:
– Pulse injection: fixed-amplitude test pulse
– L0 derandomizer can be read via ECS
– Local rate counters (before trigger)
– Time-histogramming memory in the front-end (before trigger)
– Extensive spy buffers and pattern memories in the L0 muon trigger

Slide 19: Summary and lessons learnt

Front-end requirements for calibration, testing and continuous data verification features must be defined early, so they can be taken into account in the design of custom-made front-end chips:
– Common approaches across sub-detectors are very valuable, even though LHCb uses LHCb-specific custom-made front-end chips for all sub-detectors.
Some sub-detectors initially thought (and insisted) that they did not need test pulse injection: wrong!
– Local and system tests and verifications without beam, full-chain tests, verification of individual channels, a first trial of time alignment, etc.
– LHCb cannot use cosmic muons for commissioning, as it is underground and forward-only.
– Use during active physics running must also be well planned (not only local test and calibration).
It was only relatively late that we realized that a standardized optical link test with BER measurement would be needed (jitter sensitive, fragile, sensitive to dust, many connectors, etc.):
– Fortunately the GOL had a built-in pattern generator (which was not really documented, so we may be the only real user of this?).
With our TFC architecture with only one TTC master, it is extremely important that everybody interprets all TTC commands correctly:
– Tests across sub-systems will in most cases not be made before final commissioning (at least for LHCb).
Environment monitoring (temperature, voltage, etc.) is often a bit forgotten and added at the end:
– DSS also to be kept in mind (completely separate from ECS!).
Test, verification and calibration also need significant software (which often comes at the end).
There are probably a few testing/verification features that we will regret not having included/enforced from the beginning (next year will show).

Slide 20: More information

More information about testing, timing alignment, calibration and monitoring features in the LHCb front-end electronics and DAQ interface can be found in: