1
TELL1 A common data acquisition board for LHCb
Guido Haefeli, University of Lausanne
2
Outline
- LHCb readout scheme
- LHCb data acquisition
- Optical links
- Event building network
- Common readout requirements: trigger rates, buffers, bandwidth
- Data flow on the board: synchronization, Level 1 trigger pre-processing and zero-suppression, higher level trigger processing, Gigabit Ethernet interface
- Board implementation: FPGAs, Level 1 buffer, higher level trigger multi event packet buffer
- Summary
3
LHCb trigger system
- L0: fully synchronous and pipelined, fixed latency (Pile-Up, Calorimeter, Muon)
- L1: software trigger with maximal latency (VELO, TT, Outer Tracker)
- HLT: software trigger with access to all sub-detectors
4
LHCb data acquisition
- Front Ends of the detectors are located in the cavern
- 60-100 m to the TELL1 boards in the counting room
5
Optical link implementation
6
Event building network
[Diagram: front-end electronics feed the TELL1 boards through a multiplexing layer into the readout network.]
- Level-1 traffic: 1.11 MHz, MEPs of 32 events, 2x multiplexing
- HLT traffic: 40 kHz, MEPs of 16 events, 8x multiplexing
- Readout network built from commercial network equipment (Gigabit Ethernet switches)
- 94 links into the readout network, 7.1 GB/s in total
- 94 SFCs, each with a switch and CPUs behind it, feeding a farm of ~1800 CPUs; L1-Decision Sorter and TRM also connect to the network
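As a rough cross-check of the numbers on this slide, the sketch below divides the total readout bandwidth over the 94 links and compares it to the raw capacity of a Gigabit Ethernet link; all inputs are the figures quoted above.

```python
# Rough cross-check of the event-building numbers quoted on this slide.

total_bandwidth_GBps = 7.1   # total readout network load, GB/s (slide value)
n_links = 94                 # links into the readout network (slide value)

per_link_MBps = total_bandwidth_GBps * 1000 / n_links
gigabit_ethernet_MBps = 125  # 1 Gbit/s = 125 MB/s raw capacity

print(f"Average load per link: {per_link_MBps:.1f} MB/s "
      f"({per_link_MBps / gigabit_ethernet_MBps:.0%} of a Gigabit Ethernet link)")
```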
7
Trigger rates and buffering
- Max. L0 accept rate = 1.11 MHz
- Max. L1 accept rate = 40 kHz
- The L0 buffer is implemented on the Front End and is fixed to 160 clock cycles!
- The L1 buffer stores events for the maximum L1 latency of 52.4 ms!
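A small sketch of the buffering arithmetic implied by these numbers; the 40 MHz bunch-crossing clock and the derived L1 buffer depth are assumptions, not figures quoted on the slide.

```python
# Buffering arithmetic for the L0 and L1 buffers.
# Assumption: the Front End runs on the 40 MHz LHC bunch-crossing clock.

lhc_clock_hz = 40e6
l0_buffer_cycles = 160                   # fixed L0 buffer depth (slide value)
l0_latency_us = l0_buffer_cycles / lhc_clock_hz * 1e6
print(f"L0 buffer covers {l0_latency_us:.1f} us of fixed L0 latency")

# Derived estimate (not a slide value): to cover the maximum L1 latency at the
# maximum L0 accept rate, the L1 buffer must hold roughly this many events.
l0_accept_rate_hz = 1.11e6
l1_latency_ms = 52.4
l1_buffer_events = l0_accept_rate_hz * l1_latency_ms * 1e-3
print(f"L1 buffer depth needed: about {l1_buffer_events:,.0f} events")
```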
8
Bandwidth requirements
- Input data bandwidth for a 24-optical-link motherboard:
  - Optical receiver: 24 fibres x 1.28 Gbit/s = 30.7 Gbit/s
  - Analog receiver: 64 channels x 40 MHz = 25.6 Gbit/s
- L1 Buffer: write data bandwidth 30.7 Gbit/s, read data bandwidth 4 Gbit/s
- DAQ links: 4 Gigabit Ethernet links
- ECS: 10/100 Ethernet for remote control
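The input-bandwidth figures can be reproduced as follows; the 10-bit sample width for the analog receiver is an assumption needed to make the 25.6 Gbit/s figure come out and is not stated on the slide.

```python
# Reproduce the input-bandwidth figures for a 24-optical-link motherboard.

optical_links = 24
optical_rate_gbps = 1.28                     # Gbit/s per fibre (slide value)
print(f"Optical input: {optical_links * optical_rate_gbps:.1f} Gbit/s")   # ~30.7

analog_channels = 64
sampling_rate_hz = 40e6                      # slide value
bits_per_sample = 10                         # assumption, not on the slide
analog_gbps = analog_channels * sampling_rate_hz * bits_per_sample / 1e9
print(f"Analog input:  {analog_gbps:.1f} Gbit/s")                         # 25.6
```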
9
A bit of history: the L1 trigger scheme changed
- During the last year the maximal L1T latency has increased from 1.8 ms to 52 ms (x32). This forces a change from an SRAM FIFO to SDRAM.
- Detectors were added to the L1T (TT, OT), and potentially others will follow.
- Decreasing cost of the optical links means the data processing is done in the counting room.
- More and more functionality sits on the readout board, because there is no Readout Unit and no Network Processor: the event fragments are packed into so-called "Multi Event Packets" (MEPs) to optimize Ethernet packet size and packet rate, and the acquisition board adds the IP destination, does the Ethernet framing and buffers the transmit data (see the sketch below).
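To see why the Multi Event Packets matter, this sketch estimates the Ethernet frame rate with and without packing, using the packing factors of 32 (L1) and 16 (HLT) events per MEP quoted on the event-building slide.

```python
# Effect of Multi Event Packet (MEP) packing on the Ethernet frame rate.
# Packing factors taken from the event-building slide (32 for L1, 16 for HLT).

def frame_rate_khz(event_rate_hz, events_per_mep):
    return event_rate_hz / events_per_mep / 1e3

l1_accept_hz, hlt_accept_hz = 1.11e6, 40e3

print(f"L1 traffic : {l1_accept_hz/1e3:.0f} kHz of events -> "
      f"{frame_rate_khz(l1_accept_hz, 32):.1f} kHz of MEP frames")
print(f"HLT traffic: {hlt_accept_hz/1e3:.0f} kHz of events -> "
      f"{frame_rate_khz(hlt_accept_hz, 16):.1f} kHz of MEP frames")
```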
10
How can we make a common useful read out ?
- Adaptation to the two link systems is possible with receiver mezzanine cards (A-RxCard, O-RxCard).
- FPGAs (PP-FPGA, SyncLink-FPGA) allow the adaptation for different data processing.
- Sufficient bandwidth for the entire acquisition path (L1B, RO-Tx).
- Mezzanine card for detector specific needs.
- Interfaces: ECS, TTCrx, L1T, HLT, FEM, L0 and L1 Throttle.
11
Advantages of being common!
- Solution and consensus finding for new system requirements is much easier.
- Cost reduction due to larger-quantity serial production (300 boards for LHCb).
- Reduced maintenance cost with a single system.
- Common software interfaces.
12
L1T dataflow
[Block diagram: the O-RxCard mezzanine (12 links, DDR @ 80 MHz) feeds four PP-FPGAs through sync FIFOs with ID check; each PP-FPGA performs cluster encapsulation and writes to its L1B; a 32-bit L1T link carries the data to the SyncLink-FPGA, which holds the L1T destination FIFO, L1T IP RAM, L1T framer and the L1T MEP buffer in 64 KByte internal SRAM @ 100 MHz; a shared data path for 2 channels goes to the RO-TxCard @ 100 MHz (POS-Level 3); ECS and TTC broadcast inputs.]
13
HLT dataflow
[Block diagram: the O-RxCard mezzanine (12 links, DDR @ 80 MHz) feeds the PP-FPGAs through sync FIFOs with ID check; each PP-FPGA (Altera Stratix 1S20, 18K LE) performs HLT zero suppression and event encapsulation and reads from the L1B data link (64 KEvent DDR SDRAM @ 120 MHz); the SyncLink-FPGA (Stratix 1S25, 25K LE) holds the HLT IP RAM, HLT destination FIFO, HLT framer and the HLT MEP buffer in 1 Mbyte external QDR SRAM @ 100 MHz; a shared data path for 2 channels goes to the RO-TxCard @ 100 MHz (POS-Level 3). Quoted processing times: 0.9 us/event, 20 us/event, 320 us/MEP.]
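A quick consistency check of the timing figures on this slide; the 16 events per MEP is taken from the event-building slide, and the 25 us per-event budget is derived from the 40 kHz maximum L1 accept rate.

```python
# Consistency check of the HLT dataflow timing figures.

l1_accept_rate_hz = 40e3
budget_per_event_us = 1e6 / l1_accept_rate_hz      # 25 us available per event
zero_supp_us_per_event = 20                        # slide value
events_per_mep = 16                                # from the event-building slide

print(f"Budget per event at 40 kHz: {budget_per_event_us:.0f} us "
      f"(zero suppression needs {zero_supp_us_per_event} us)")
print(f"Time per MEP: {zero_supp_us_per_event * events_per_mep} us "
      f"(slide quotes 320 us/MEP)")
```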
14
Prototyping
- Motherboard specification, schematics and layout are finished.
- Daughtercard designs:
  - Pattern generator card (available)
  - 12-way optical receiver card (design finished)
  - RO-TxCard implemented as a dual Gigabit Ethernet card (see talk from Hans in this session)
  - CCPC and GlueCard for ECS
- Test system:
  - PCI based data generator card
  - Gigabit Ethernet connection to a PC
15
Technology used
- FPGAs: Stratix 1S20 (780-pin FBGA), Stratix 1S25 (1020-pin FBGA)
- Main features used:
  - Memory blocks of 512 Kbit, 4 Kbit and 512 bit
  - DDR SDRAM interface (dedicated circuit)
  - DDR I/Os
  - Terminator technology for on-chip serial and parallel termination
  - DSP blocks for L1T pre-processing
- DDR SDRAM running at 240 MHz data transfer rate (120 MHz clock) for the L1B
- QDR SRAM running at 100 MHz for the HLT MEP buffer
- 12-layer PCB (50 Ohm)
17
DDR bank data signal layout
Equal-length signal traces are required for the DDR SDRAM implementation: 4 banks x 48 bits wide at 240 MHz data transfer rate (46 Gbit/s)!
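The 46 Gbit/s figure follows from the bank width and the double-data-rate transfer rate quoted on the technology slide; a minimal sketch of the arithmetic:

```python
# L1B DDR SDRAM bandwidth: 4 banks x 48 bits, double data rate on a 120 MHz clock.

banks = 4
bits_per_bank = 48
ddr_clock_hz = 120e6
transfers_per_clock = 2                  # DDR: data transferred on both clock edges

transfer_rate_hz = ddr_clock_hz * transfers_per_clock         # 240 MHz, as quoted
bandwidth_gbps = banks * bits_per_bank * transfer_rate_hz / 1e9
print(f"Aggregate L1B DDR bandwidth: {bandwidth_gbps:.1f} Gbit/s")     # ~46.1
```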
18
[Photograph of the board; scale: 14 cm]
19
Summary
After evaluating different concepts for data processing and acquisition, a common readout board for LHCb has been specified and designed. It serves 24 optical links with a data transfer rate of 1.28 Gbit/s each. The board implements data identification, L1 buffering and zero suppression. It is made for use with standard Gigabit Ethernet equipment.