1
ETD/Online Summary D. Breton, U. Marconi, S. Luitz
Frascati Workshop 06/2011
2
Synchronous, Pipelined, Fixed-Latency
Main items of discussion this week: Synchronous, Pipelined, Fixed-Latency
3
Trigger: the main remaining question here is linked to the trigger time jitter, which has a direct influence on the dataflow, especially for sub-detectors with a short readout window and a lot of data (like the SVT).
Readout: both DCH and EMC now have a dual-path front-end with dedicated trigger and readout data treatment.
4
A possible sketch of the trigger system in SuperB
DC and EMC trigger crates have a common interface (LVDS or optical) with the pertaining sub-detectors. EMC(i) and DC(i) boards share a common hardware platform and only differ in firmware.
The main remaining problem for the trigger is the slow signal from the CsI, which degrades the time resolution of the trigger. At the moment one could imagine building a much faster trigger signal using only the first part of the energy deposit in the CsI. The resolution would be worsened by the square root of the energy fraction collected (a short worked form follows below). But it looks like there is no crystal to play with; we are waiting for a small sample of BaBar crystals (from Bill).
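A minimal worked reading of that scaling, assuming the resolution is dominated by photostatistics so that integrating only a fraction f of the deposit reduces the collected photoelectron count proportionally (this assumption is mine, not from the talk):

```latex
% Assumption: photostatistics-limited resolution, N_pe photoelectrons for the full
% pulse, fraction f (0 < f <= 1) collected when only the early part is integrated.
\[
  \frac{\sigma_{\text{fast}}}{\sigma_{\text{full}}}
  \;\approx\; \sqrt{\frac{N_{\mathrm{pe}}}{f\,N_{\mathrm{pe}}}}
  \;=\; \frac{1}{\sqrt{f}}, \qquad 0 < f \le 1 .
\]
```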
5
Fast data links: error detection
Sergio Cavaliere presented algorithms and structures for error detection as a possible alternative to full error-correcting codes. This solution shows good results at a much lower hardware complexity and overhead. Errors may then be managed simply, by discarding the affected data further along the receiving path or even at an off-line stage. Because of the very low expected error rate this does not significantly degrade the data quality, and it seems to be the preferred solution.
Both CRC (Cyclic Redundancy Check) and various kinds of checksums have been extensively evaluated and compared in terms of both detection efficiency and overhead (a small illustrative comparison follows below).
Cavaliere - SuperB Workshop - May 2011
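A small illustrative comparison of the two families of codes discussed (the CRC-16/CCITT polynomial, the 16-bit widths, and the toy frame are assumptions for this sketch, not the parameters evaluated in the talk):

```python
# Illustrative comparison of a CRC and a simple additive checksum on a short frame.
# Polynomial, widths and the example frame are assumptions chosen for the sketch.

def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bit-serial CRC-16: the shift/XOR structure maps naturally onto FPGA logic."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def additive_checksum16(data: bytes) -> int:
    """Plain 16-bit additive checksum: cheaper logic, but blind to reordering errors."""
    return sum(data) & 0xFFFF

if __name__ == "__main__":
    frame = bytes([0xCA, 0xFE, 0x00, 0x42])
    swapped = bytes([0xCA, 0xFE, 0x42, 0x00])   # two payload bytes exchanged
    print(additive_checksum16(frame) == additive_checksum16(swapped))  # True: error missed
    print(crc16_ccitt(frame) == crc16_ccitt(swapped))                  # False: error detected
```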
6
Fast data links: SERDES
The Napoli team introduced the idea of using Xilinx FPGAs as SERDES for the different links, to replace the TLK2711 (readout link) and maybe the DS92LV18 (control link). This was presented by Raffaele.
=> These circuits have to be fully validated for the radiation environment. Total dose should not be a problem; the concern is mostly single event effects, which may affect both the configuration memory and the active design.
In order to perform a first qualification, a method simulating the effect of single events has been presented; they can be produced either by hardware or software means (a toy sketch follows below). First simulation results are encouraging. A real irradiation test is planned in Catania next July.
Using FPGAs for the links looks appealing, but we are at a preliminary stage: all pros and cons have to be listed and fully understood.
We also had a detailed presentation from Alberto with a summary of the technical R&D work already performed in Napoli and his views concerning the system integration and the sharing of responsibilities between the different teams for the link interface integration in the different systems.
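A purely illustrative, software-only sketch of a single-event-upset injection campaign (the configuration-memory size, the essential-bit fraction and the statistics model are my assumptions, not the Napoli method):

```python
# Toy software fault-injection model for single event upsets (SEUs): flip random bits
# in a modelled configuration memory and count how many land on "essential" bits,
# i.e. bits actually used by the design. All sizes and fractions are assumptions.
import random

CONFIG_BITS = 1_000_000        # modelled configuration memory size (assumption)
ESSENTIAL_FRACTION = 0.10      # fraction of configuration bits used by the design (assumption)

def run_campaign(n_injections: int, seed: int = 1) -> float:
    """Inject random single-bit upsets; return the fraction hitting essential bits."""
    rng = random.Random(seed)
    n_essential = int(CONFIG_BITS * ESSENTIAL_FRACTION)
    essential = {rng.randrange(CONFIG_BITS) for _ in range(n_essential)}
    hits = sum(1 for _ in range(n_injections) if rng.randrange(CONFIG_BITS) in essential)
    return hits / n_injections

if __name__ == "__main__":
    print(f"fraction of injected upsets affecting the design: {run_campaign(10_000):.3f}")
```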
7
Simulation of the derandomizer hardware Verilog model
The goal of the study is to get a first flavour of the necessary derandomizer depth in order to maintain a limited dead-time. This follows the simulations performed by Steffen (presented at LNF in April).
Different parameters have to be taken into account.
3 fixed parameters at the system level:
1. Mean trigger rate: R = 150 kHz
2. Minimum distance between triggers: 70 ns?, hopefully less...
3. Required dead time: < 1%
1 fixed parameter at the sub-detector level:
4. Readout window: W [number of 56 MHz clock periods]
1 parameter to define properly at the sub-detector level:
5. Number of channels multiplexed at the output of the derandomizer to feed the link: N
The mean link occupancy is then defined by:
Ratio = Average payload / Nominal capacity = (R x W x N x 32 bits) / 1.8 Gbit/s
The goal is then to optimize the derandomizer depth for the required dead time (a minimal simulation sketch follows below).
Jihane Maalmi - Dominique Breton - Elba - May 2011
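A minimal Monte-Carlo sketch of this optimisation, assuming Poisson-distributed triggers, a fixed event size of W x N 32-bit words per trigger, and a continuous 1.8 Gbit/s drain; the service model is a simplification of the actual Verilog behaviour and the W, N values below are examples only:

```python
# Toy derandomizer (FIFO) simulation: Poisson triggers at rate R fill the buffer with
# fixed-size events while the output link drains it at its nominal capacity.
# A trigger arriving when the buffer is full is counted as dead time.
# The continuous drain model and the example W, N values are assumptions for the sketch.
import random

def dead_time(depth_events: int, R: float = 150e3, W: int = 50, N: int = 6,
              link_bps: float = 1.8e9, n_triggers: int = 200_000, seed: int = 1) -> float:
    """Return the fraction of triggers lost because the derandomizer was full."""
    rng = random.Random(seed)
    event_bits = W * N * 32                        # bits stored per accepted trigger
    capacity_bits = depth_events * event_bits      # total FIFO capacity
    buffer_bits, lost = 0.0, 0
    for _ in range(n_triggers):
        gap = rng.expovariate(R)                   # exponential inter-trigger time
        buffer_bits = max(0.0, buffer_bits - link_bps * gap)   # link drains during the gap
        if buffer_bits + event_bits > capacity_bits:
            lost += 1                              # FIFO full: trigger lost -> dead time
        else:
            buffer_bits += event_bits
    return lost / n_triggers

if __name__ == "__main__":
    # W = 50, N = 6 gives Ratio = 150e3 * 50 * 6 * 32 / 1.8e9 = 0.8
    for depth in (2, 4, 8, 16, 32):
        print(f"depth = {depth:2d} events  ->  dead time ~ {dead_time(depth):.2%}")
```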
8
Preliminary simulation results: derandomizer depth
Trigger mean rate: 150 kHz; min inter-trigger distance = 50 ns; required dead time between 0.9% and 1.1%.
W varying between 10 and 55 (180 ns to 990 ns); mean link occupancy (Ratio) between 0.5 and 0.95.
Result plots (not reproduced here): derandomizer depth [number of full events] vs Ratio, with the effect of a x2 increase of the trigger rate and the effect of the minimum time distance between triggers.
Jihane Maalmi - Dominique Breton - Elba - May 2011
9
Conclusion about derandomizer study
We noticed opposite behaviours of the derandomizer optimisation for very different sub-detectors:
- short time window, high multiplexing factor and reduced pile-up
- long time window, low multiplexing factor and high pile-up => here, pile-up helps because it saturates the dataflow in a clean way!
In the case of a variable event size, should we base the derandomizer depth on:
- the average size? (less depth but more potential pile-up)
- the maximum size? (minimum dead-time but maximum depth)
Importance of a fast throttle for the optimisation of the derandomizer depth. Question raised: how to implement a model of the derandomizer if the event size is random (to build the throttle)? The worst case will depend on the sub-detector implementation. We may need a fast direct throttle between the FEE and the FCTS; the return path of the clock and control links could be used for this purpose.
The final goal of the study is to give the sub-detectors a table with the necessary derandomizer depth and link occupancy ratio with respect to the width of their own trigger time window, and to cross-check the results with Steffen's.
10
ROM: since Caltech, new steps were made in understanding the solution to be described in the TDR. The current R&D is based on custom electronics and host PCs/CPUs: an FPGA gets the data from the FEE (and possibly performs synchronous processing) and injects them into the PC via PCIe; the CPUs then perform the complex data processing and the data transfer, using a standard protocol and on-board network interface cards.
Umberto presented the status of this R&D: using a commercial PCIe demonstration board, FPGA-driven data transfers were performed through PCIe at up to 14.5 Gbit/s towards a memory located on the motherboard (the goal is > 10 Gbit/s towards the NIC; a rough budget sketch follows below).
=> No showstopper for the R&D!
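A back-of-the-envelope sketch of the ROM throughput budget implied by these numbers (the number of input links per ROM and their occupancy are hypothetical values chosen for illustration, not TDR parameters):

```python
# Rough ROM throughput budget: the aggregate input from the optical links must fit
# within the demonstrated PCIe rate and the > 10 Gbit/s NIC goal.
# links_per_rom and occupancy are illustrative assumptions, not TDR numbers.
LINK_CAPACITY_GBPS = 1.8      # nominal readout link capacity (from the derandomizer study)
PCIE_DEMO_GBPS     = 14.5     # demonstrated FPGA -> host memory transfer rate
NIC_GOAL_GBPS      = 10.0     # target output rate towards the network

def rom_budget(links_per_rom: int, occupancy: float) -> None:
    aggregate = links_per_rom * LINK_CAPACITY_GBPS * occupancy
    print(f"{links_per_rom} links at {occupancy:.0%} occupancy -> {aggregate:.1f} Gbit/s in; "
          f"PCIe ok: {aggregate < PCIE_DEMO_GBPS}, NIC ok: {aggregate < NIC_GOAL_GBPS}")

if __name__ == "__main__":
    rom_budget(links_per_rom=8, occupancy=0.7)   # hypothetical configurations
    rom_budget(links_per_rom=6, occupancy=0.9)
```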
11
SVT FE Status
SVT baseline for the TDR (figure: Layer0, old vs new beam pipe):
- Layer0: striplets at R ~ 1.5 cm, triggered FE chips
- 5 layers of silicon strip modules, triggered or data-push FE chips
- Upgrade Layer0 to thin pixels for the full-luminosity run: more robust against background occupancy
Several items of progress on the baseline design in the last few months:
- Definition of the requirements for the readout chips for striplets and strips. Two new chips need to be developed, since the existing chips do not match all the requirements: analog information and high rates in the inner layers (0-3) with short shaping time; long shaping time in layers 4-5 to reduce the noise for the long modules.
- Started to evaluate whether the readout architecture developed for the pixels can be used for the strips: no evident showstopper up to now. Good indications that for L1 the pixel readout architecture can be reused fruitfully.
- First estimate of noise vs shaping time in each layer done.
- The data chain is progressing by defining all its elements: technical analysis on the HDI, serializer, tail and power cables.
La Biodola 2011 - Mauro Citterio
12
DCH (figure: preamp boards, HV side and front-end side). Gain ≈ 9 mV/fC, noise ≈ 2500 e- rms, BW ≈ 250 MHz.
Because of the radiation background, we are evaluating the possibility of using flash-based FPGA devices (Actel ProASIC devices) to implement the readout chain. A 1 ns resolution TDC has already been implemented and successfully simulated.
Concerning CC (cluster counting) we have identified 3 possible scenarios:
- Local feature extraction (i.e. arrival time of individual clusters): flexible, but requires fast digitizers and Virtex-6/7 devices just outside the detector (radiation environment + power dissipation problems).
- Buffers + data links and remote feature extraction: flexible, but requires a huge amount of data links.
- Analog derivative method + local feature extraction: sensitive to the S/N ratio (a toy sketch follows below).
Input signal S/N ≈ ∞ => cluster detection efficiency ≈ 88%; input signal S/N ≈ 10 => cluster detection efficiency ≈ 55%.
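A purely illustrative sketch of the derivative-plus-threshold idea behind the third scenario (the pulse shape, sampling, noise level and threshold are assumptions for the sketch, not the actual DCH waveforms or algorithm):

```python
# Toy cluster detection on a drift-chamber-like waveform: each ionisation cluster
# produces a fast leading edge, so thresholding the discrete derivative of the
# sampled signal marks cluster arrival times. Lower S/N -> more missed/fake clusters.
# Pulse shape, noise level and threshold are assumptions chosen for the sketch.
import math
import random

def make_waveform(cluster_times, n_samples=500, tau=20.0, snr=10.0, seed=1):
    """Sum of unit-amplitude exponential cluster pulses plus Gaussian noise of sigma 1/snr."""
    rng = random.Random(seed)
    wf = [0.0] * n_samples
    for t0 in cluster_times:
        for i in range(t0, n_samples):
            wf[i] += math.exp(-(i - t0) / tau)
    return [s + rng.gauss(0.0, 1.0 / snr) for s in wf]

def find_clusters(wf, threshold=0.5):
    """Return sample indices where the discrete derivative crosses the threshold upward."""
    deriv = [b - a for a, b in zip(wf, wf[1:])]
    hits, above = [], False
    for i, d in enumerate(deriv):
        if d > threshold and not above:
            hits.append(i)                 # leading edge of a candidate cluster
        above = d > threshold
    return hits

if __name__ == "__main__":
    truth = [50, 120, 125, 300]            # generated cluster arrival times (in samples)
    print("true clusters:", truth)
    print("found clusters near samples:", find_clusters(make_waveform(truth, snr=10.0)))
```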
13
PID: there are many ongoing developments on MaPMTs and electronics test setups in Bari, at the University of Maryland and in Orsay, with encouraging preliminary results.
The PM receiver board, with the front-end ASIC architecture implemented in discrete components, is under design and will be produced soon.
The design of the 100 ps SCATS TDC is done. The 16-channel layout is almost done and the post-layout simulation will follow; on time for a submission in July.
A design with the FBLOCK fully equipped with FE boards and a "DIRC-like" crate controller is an option to be studied, taking into account mechanical, thermal and shielding issues.
The new SAMLONG analog memory (3.2 GS/s, 12 bits, 1024 cells/channel) has been fully characterized and will equip the new 16-channel waveform digitizer.
14
EMC: the < 2% crosstalk problem was solved after the test beam by adding a shielding box. Good results from the new BTF test beam (figures: CERN TB, BTF TB, 25-channel LYSO FE setup).
They are interested in exploring the possibility of using Xilinx FPGAs in the front end, as done in the past, with new technology: "Radiation Test and Application of FPGAs in the Atlas Level 1 Trigger", V. Bocci et al., 7th Workshop on Electronics for LHC Experiments, Stockholm, Sweden, September 2001.
15
BaBar IFR upgrade: LST readout status report
New collaborators are joining the IFR workgroup from: the AGH University of Science and Technology, the Cracow University of Technology (CUT), and the Institute of Nuclear Physics of the Polish Academy of Sciences (INP PAN).
The 2010 Fermilab beam test results for the IFR prototype confirm the front-end and DAQ design principles.
Toward the final DAQ design:
- the selection of a suitable ASIC for the IFR front-end cards is in progress, using the "EASIROC" test system provided by the OMEGA group of LAL in Orsay
- we are designing toward locating the SiPMs in accessible areas within the IFR barrel
- the technical issue of large-scale SiPM characterization and QC is being addressed by means of suitable ASICs
- we are starting to investigate rad-tolerant FPGAs and design techniques for radiation mitigation in the IFR front end and latency buffers
- an indication of the preferred location of the front-end electronics crates and cable conduits has been given
XVII SuperB Workshop - Elba, May 2011 - A. Cotta Ramusino for INFN-FE/Dip. Fisica UNIFE
16
Conclusion
Following Steffen's introduction to the ETD system for newcomers in the first session, we got interesting questions and lively discussions about our architecture.
The derandomizer simulations were fruitful in terms of understanding the effect of the link occupancy and the different behaviours of the sub-detectors. The throttling mechanism has to be further investigated.
There is an urgent need for equipping a CsI crystal with a fast preamplifier in order to try to reduce the trigger jitter => this influences the whole system.
There is a rising interest in qualifying RAM-based FPGAs for use on the detector.
We started collecting information about the location and volume required for the electronics => a synthesis will be done soon and transmitted to Bill.
Then we'll need the radiation flux simulations for:
- checking whether the envisaged locations are compatible with the level of complexity required for the electronics
- defining the mitigation rules