LHCb Timing and Fast Control System
LHCb DAQ Review, September 11-12
TFC Team: Arek Chlopik (Warsaw), Zbigniew Guzik (Warsaw), Richard Jacobsson (CERN), Beat Jost (CERN)
- Introduction to the TFC system
- Progress and status
LHCb Read-out
[Read-out architecture diagram: the LHC-B detector (VDET, TRACK, ECAL, HCAL, MUON, RICH) feeds the Level-0 and Level-1 front-end electronics; the Level-0 trigger runs at 40 MHz with a fixed latency of 4.0 µs, the Level-1 trigger at 1 MHz with a variable latency of <1 ms; data rates are 40 TB/s at the front end and 1 TB/s into the front-end links; Front-End Multiplexers (FEM) and Read-out Units (RU) feed the Read-out Network (6-15 GB/s) and the Sub-Farm Controllers (SFC), where the Level-2/3 event filter (variable latency, L2 ~10 ms, L3 ~200 ms) reduces the output to 50 MB/s to storage; Timing & Fast Control, the throttle path and the Control & Monitoring LAN tie the system together.]
Unique feature: two levels of high-rate triggers:
- Level-0: 40 MHz → 1.1 MHz accept rate
- Level-1: 1.1 MHz → 40-100 kHz accept rate
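The rates in the diagram imply average event sizes that are easy to cross-check. A minimal back-of-the-envelope sketch in Python, purely illustrative; only the rates themselves come from the slide, the derived per-event sizes do not:

```python
# Back-of-the-envelope check of the read-out rates quoted above.
# The input numbers are taken from the slide; the derived event sizes are
# only implied, not stated, on the slide.

BUNCH_RATE_HZ  = 40e6    # bunch-crossing rate seen by L0
L0_ACCEPT_HZ   = 1.1e6   # L0 accept rate
L1_ACCEPT_HZ   = 40e3    # lower end of the 40-100 kHz L1 accept range

FRONT_END_BPS  = 40e12   # 40 TB/s at the detector front end
L0_OUTPUT_BPS  = 1e12    # 1 TB/s into the front-end links
RO_NETWORK_BPS = 6e9     # lower end of the 6-15 GB/s read-out network band

# Implied average data volume per crossing / per accepted event
print("per crossing at the front end : %.1f MB" % (FRONT_END_BPS / BUNCH_RATE_HZ / 1e6))
print("per L0-accepted event         : %.1f MB" % (L0_OUTPUT_BPS / L0_ACCEPT_HZ / 1e6))
print("per event into the RO network : %.0f kB" % (RO_NETWORK_BPS / L1_ACCEPT_HZ / 1e3))
```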
Timing and Fast Control
Consists of:
- The RD12 TTC distribution system: TTCtx's, tree-couplers, TTCrx's
- Components specific to LHCb: Readout Supervisors, TFC Switch, Throttle Switches, Throttle ORs
Use of TTC
Timing, trigger and control are distributed using the TTC system:
- Channel A is used to distribute the L0 trigger accept/reject signal (40 MHz → 1.1 MHz accept rate)
- Channel B is used to distribute short broadcasts: the L1 trigger (1.1 MHz → 40-100 kHz accept rate), bunch/event counter resets, and control commands (FE resets, calibration pulses)
- Broadcast order is handled according to a priority scheme (a sketch of such a scheme follows below)
Usage of the 6 (+2) user bits in the short broadcasts: [bit-allocation table shown on the slide]
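The priority scheme itself is not spelled out on this slide; the following Python sketch only illustrates the idea of ordering channel B short broadcasts by priority. The command types, priority values and user-bit patterns are invented for the example.

```python
import heapq

# Hypothetical illustration of the channel B priority scheme mentioned above.
# The actual priorities and the encoding of the 6 (+2) user bits are defined
# by the RS design, not by this sketch.

PRIORITY = {            # lower number = sent first (assumed ordering)
    "L1_TRIGGER": 0,    # L1 accept/reject broadcasts
    "BC_RESET":   1,    # bunch counter reset
    "EC_RESET":   1,    # event counter reset
    "FE_RESET":   2,    # front-end reset command
    "CALIB":      3,    # calibration pulse command
}

class BroadcastQueue:
    """Short broadcasts waiting for a free channel B slot, highest priority first."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def push(self, kind, user_bits):
        assert 0 <= user_bits < 64          # 6 user bits in a short broadcast
        heapq.heappush(self._heap, (PRIORITY[kind], self._seq, kind, user_bits))
        self._seq += 1                      # keep FIFO order within a priority

    def pop(self):
        _, _, kind, user_bits = heapq.heappop(self._heap)
        return kind, user_bits

q = BroadcastQueue()
q.push("CALIB", 0b000001)
q.push("L1_TRIGGER", 0b100000)
print(q.pop())   # -> ('L1_TRIGGER', 32): trigger broadcasts go out first
```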
LHCb-specific components
- Readout Supervisor "Odin": all mastership in a single module
- TFC Switch: clock, trigger and command distribution; supports partitioning
- Throttle Switches (L0 & L1) and Throttle ORs: throttle feedback
Readout Supervisor "Odin"
[Block diagram of the single module: clock distribution (LHC clock), L0 handling & distribution, auto-trigger generator, trigger controller (throttle inputs), reset/command generator, L1 handling & distribution with L1 derandomizer, the "RS Front-End" (L0/L1 data to the DAQ), the ECS interface, and the TTC encoder driving channels A/B.]
Designed with emphasis on versatility, to support many different types of running modes; functionality is easily added and modified.
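As a small illustration of one of the blocks above, here is a minimal Python sketch of an L1-derandomizer-style buffer whose occupancy drives a throttle signal. The buffer depth and threshold are made-up numbers, not the values used in Odin.

```python
from collections import deque

# Minimal sketch of the idea behind the L1 derandomizer and the trigger
# controller's throttle: depth and threshold are invented for the example.

class L1Derandomizer:
    def __init__(self, depth=16, throttle_at=12):
        self.buf = deque()
        self.depth = depth
        self.throttle_at = throttle_at

    def accept(self, event_id):
        """Called on an L1 accept; returns False if the buffer would overflow."""
        if len(self.buf) >= self.depth:
            return False                 # should never happen if throttled in time
        self.buf.append(event_id)
        return True

    def readout_one(self):
        """Called when the DAQ has drained one event."""
        if self.buf:
            self.buf.popleft()

    @property
    def throttle(self):
        """Asserted when occupancy crosses the threshold; stops further accepts."""
        return len(self.buf) >= self.throttle_at

d = L1Derandomizer()
for evt in range(13):
    d.accept(evt)
print(d.throttle)   # True: the trigger controller would now hold off L1 accepts
```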
LHCb partitioning
- Partition (TFC definition): a generic term for a configurable ensemble of parts of a sub-detector, an entire sub-detector, or a combination of sub-detectors that can be run in parallel, independently, and with a timing, trigger and control configuration different from that of any other partition.
- Option: 16 or 32 concurrent partitions (see the configuration sketch below)
- Crucial: equal internal propagation delays. If the skew is too large, the front-ends will suffer from timing-alignment problems when driven by different Readout Supervisors.
[Diagram: a pool of Readout Supervisors feeds the TFC Switch (TTC information encoded electrically); the front-ends, grouped by TTCtx/optical couplers into partition elements, form Partition A and Partition B.]
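A minimal sketch, assuming the simplest view of the TFC Switch as a programmable crossbar: each switch output drives one group of front-ends via a TTCtx/optical coupler, and a partition is simply the set of outputs routed from one Readout Supervisor. All names and group assignments are illustrative only.

```python
# Illustrative model of TFC Switch partitioning; not the actual switch firmware.

N_OUTPUTS = 16   # one of the configuration options quoted above (16 or 32)

class TFCSwitch:
    def __init__(self, n_outputs=N_OUTPUTS):
        # routing[output] = readout supervisor currently driving that output
        self.routing = {out: None for out in range(n_outputs)}

    def configure_partition(self, supervisor, outputs):
        """Route one RS to a set of outputs; refuse outputs already owned."""
        for out in outputs:
            if self.routing[out] is not None:
                raise ValueError(f"output {out} already belongs to a partition")
        for out in outputs:
            self.routing[out] = supervisor

    def partitions(self):
        """Current mapping RS -> list of outputs, i.e. the active partitions."""
        parts = {}
        for out, rs in self.routing.items():
            if rs is not None:
                parts.setdefault(rs, []).append(out)
        return parts

sw = TFCSwitch()
sw.configure_partition("RS-A", outputs=[0, 1, 2])   # e.g. one sub-detector test run
sw.configure_partition("RS-B", outputs=[3, 4])      # e.g. a calibration run in parallel
print(sw.partitions())
```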
Buffer overflows
- Throttle signals are fed back electrically to the RS in control of the partition with data congestion (see the sketch below)
- Two Throttle Switches: one for throttle signals to the L0 trigger, one for throttle signals to the L1 trigger
- All Throttle Switches and ORs will log the throttle history
[Diagram: front-ends etc., grouped by Throttle ORs (mirroring the Throttle Switch grouping), send throttle signals through the Throttle Switch back to the pool of Readout Supervisors owning Partition A and Partition B.]
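Below is a minimal Python sketch of that feedback path: a Throttle OR per group of sources, and a Throttle Switch that routes each group's OR back to the Readout Supervisor owning the corresponding partition, logging every assertion. The group names and routing table are invented for the example.

```python
# Sketch of the throttle feedback path; the grouping mirrors the TFC Switch
# partitioning.  Names and routing are illustrative only.

def throttle_or(lines):
    """Throttle OR: asserted if any source in the group asserts its throttle."""
    return any(lines)

class ThrottleSwitch:
    def __init__(self, group_to_rs):
        self.group_to_rs = group_to_rs   # group name -> owning Readout Supervisor
        self.history = []                # throttle history is logged

    def route(self, group_states, timestamp):
        """group_states: {group: [throttle lines]} -> {rs: throttled?}"""
        per_rs = {}
        for group, lines in group_states.items():
            rs = self.group_to_rs[group]
            asserted = throttle_or(lines)
            per_rs[rs] = per_rs.get(rs, False) or asserted
            if asserted:
                self.history.append((timestamp, group, rs))
        return per_rs

tsw = ThrottleSwitch({"velo": "RS-A", "calo": "RS-B"})
print(tsw.route({"velo": [False, True], "calo": [False, False]}, timestamp=42))
# -> {'RS-A': True, 'RS-B': False}: only the RS owning the congested partition is throttled
```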
Progress and Status
In view of the TDR, the aims of this year are to:
- Design the TFC components specific to LHCb
- Review the TFC architecture and components
- Lay out first prototypes of the TFC Switch and the RS
- Simulate the RS at several levels
- Test the way the TFC architecture exploits the TTC system
- Produce first prototypes of the TFC Switch and the RS
- Test critical points of the TFC Switch and the RS
Overview of the work schedule for 2001: [schedule chart shown on the slide]
Except for delays in the area of testing the TTC system, the schedule has been well maintained.
TFC Switch (Progress and Status)
- Reviewed November 8, 2000 (together with the TFC architecture and the Throttle Switches): very well received
- The first prototype was ready in April-May. The main aim of the prototype was to measure the two crucial quantities:
  - Time skew between paths (target <100 ps)
  - Jitter (target 50 ps)
TFC Switch (Progress and Status)
All measurements were carried out successfully:
- Jitter at the output is ~80-100 ps; the jitter from the generator is ~50 ps
- The maximum skew between all inputs to each multiplexer is 100-400 ps
- The skew between output paths (multiplexers to outputs) was very large (up to 4 ns)
- A few mistakes were discovered in the routing used to equalize the paths. These mistakes plus the dielectric characteristics can account for the skews measured on the input and output paths.
- The intrinsic propagation delay of the multiplexers varies between 400 and 1000 ps; the specs claim a maximum of 850 ps.
- Comparing line lengths with measured propagation delays shows that the signal speed is ~40% slower than the "ideal" 5 ns/m. This is consistent across all measurements.
The measurements show that the performance with respect to skew is not satisfactory. Solution:
- Route all lines on board layers with equal dielectric characteristics
- Add appropriate delay lines at the outputs to compensate for the inevitable intrinsic skew due to the components (see the sketch below). The temperature dependence of the delay chips is a problem: each board needs calibration.
- Use input and output coupling capacitors with tighter tolerances to improve the signal shape
The switch still has to be tested with the CC-PC and in a full TTC setup.
The first prototype will be sufficient for tests of the first prototype of the RS.
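The arithmetic behind the output delay lines is straightforward: each output is padded up to the slowest path. A minimal Python sketch, using made-up delays in the few-ns range quoted above rather than the actual measurements:

```python
# Pad every output path up to the slowest one with an added delay line.
# The delays below are invented values within the range quoted on the slide
# (skews up to ~4 ns); they are not the measured numbers.

measured_delay_ps = {   # output path -> measured propagation delay in ps
    "out0": 2100,
    "out1": 3400,
    "out2": 5600,       # slowest path sets the reference
    "out3": 4800,
}

slowest = max(measured_delay_ps.values())
delay_line_ps = {out: slowest - d for out, d in measured_delay_ps.items()}
print(delay_line_ps)    # {'out0': 3500, 'out1': 2200, 'out2': 0, 'out3': 800}

# The nominal skew is now zero; what remains is the tolerance and temperature
# drift of the delay chips themselves, hence the per-board calibration.
```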
RS "Odin" (Progress and Status)
- Specifications were ready at the end of last year; the design is almost entirely based on FPGAs
- Specs, logical design and a first draft of the schematics were reviewed on April 4, 2001:
  - Very well received
  - The importance of simulation was emphasized
- The specs have been simulated in a high-level behavioral model, together with behavioral models of the LHC machine, the trigger decision unit and the FE, using Visual HDL
RS "Odin" (Progress and Status)
- The FPGA designs have been simulated using MaxPlus
- To check the FPGA designs and cross-check the MaxPlus simulation, some of the blocks have been simulated at gate level using Leapfrog
- The behavioral models of the LHC machine, the trigger decision units and the FE have been refined to support a simulation of the real RS design. The behavioral model of the RS is currently being replaced in Visual HDL, block by block, with the FPGA designs at gate level including all delays.
  - The entire L0 path (except the TTC encoder) has been simulated. This shows that the current design, using three or four clocks (different levels of pipelining), works.
[Diagram: I/O, L0 pipeline and L0 handling blocks of the simulated path.]
RS "Odin" (Progress and Status)
- The interfaces to the L0 and L1 trigger Decision Units have been agreed on
- An RS Minimal Version is currently in production:
  - Almost all functionality, but without the "RS internal FE" and with fewer counters
  - Aims of the first prototype: verify that the FPGAs are sufficiently fast, with a safe margin, for the functions requiring synchronous operation; measure performance and check concurrent operation of the many functions
TTC tests (Progress and Status)
- The need for 1.1 MHz short-broadcast transmission on channel B is a crucial point to test. Lacking the RS, a test bench was set up using existing equipment: ALEPH FIC → TTCvi → TTCvx → TTCtx → TTCpr (VME).
- Using a scope (before the TTCpr was available) showed no problem transmitting 1.1 MHz short broadcasts; 1.6 MHz was measured. Data integrity was not tested. (The arithmetic behind this requirement is sketched below.)
- Since the encoder circuit of the TTCvx will be implemented in the RS and we will use the TTCtx, the test bench has also allowed us to gain experience and study the performance.
- The TTCpr is designed to receive the ATLAS L1A:
  - Help from ATLAS to modify the code of the onboard FPGA to receive short broadcasts
  - Two problems remain: the transfer of the short broadcasts into the host PC does not work properly; and testing the same throughput (1.1 MHz × 16 bits) using the ATLAS version of the FPGA (long broadcasts) shows problems above ~100 kHz, with jumps in the event IDs. Is the PC unable to cope?
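The bandwidth requirement being tested can be checked with a few lines of arithmetic. In the Python sketch below, the 16-bit short-broadcast frame and the 1.1 MHz rate are taken from the slide; the 40 Mbit/s channel B bit rate is quoted as the nominal TTC figure and should be treated as an assumption here.

```python
# Arithmetic behind the 1.1 MHz short-broadcast requirement tested above.

BROADCAST_RATE_HZ = 1.1e6   # required L1 broadcast rate (from the slide)
FRAME_BITS        = 16      # TTC short-broadcast frame length (from the slide)
CHANNEL_B_BPS     = 40e6    # assumed nominal channel B bit rate

required_bps = BROADCAST_RATE_HZ * FRAME_BITS   # 17.6 Mbit/s
occupancy    = required_bps / CHANNEL_B_BPS     # ~0.44

print(f"required : {required_bps/1e6:.1f} Mbit/s")
print(f"occupancy: {occupancy:.0%} of channel B")

# Under the same assumption, the hard limit for back-to-back short broadcasts
# is 40e6 / 16 = 2.5 MHz, comfortably above the 1.6 MHz measured on the scope.
```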
Conclusions
- The LHCb TFC system architecture and its specific components have been examined in two reviews
- The partitioning concept is well integrated
- The first designs and layouts of the LHCb-specific components are ready
- Detailed simulation of the RS is continuing
- The first prototype of the TFC Switch has been built and the first RS is in production
- The results of the tests of the first TFC Switch are very useful
- The first tests with the TTC system show positive results; work on the TTCpr is ongoing
- When the RS is ready, it will replace the TTCvi + TTCvx in the test bench