Slide 1: A Possible θ13 Electronics Architecture
A Strawman Proposal
Kelby Anderson for Jim Pilcher
30-Apr-2004
Slide 2: The Objective
- We should discuss and examine the architecture before any detailed design.
  - We need to get the high-level features correct.
  - More people can contribute ideas to high-level planning.
  - Once we agree on scope, the detailed planning and design can begin; lots of work for anyone interested.
- This talk is intended to provide a strawman plan for the architecture; other options can be compared to it.
- The readout should take advantage of modern electronics developments.
  - We can do more with a given budget now than 10 years ago, when CHOOZ, SNO, and KamLAND were designed.
Slide 3: Electronics Requirements
- Digitize the charge seen by each PMT.
  - Energy reconstruction.
- Provide the timing of the signal from each PMT.
  - One component of the position information (along with the energy sharing among PMTs).
- Provide a trigger for the DAQ.
  - Physics triggers: neutrinos (prompt EM energy, delayed neutron energy), backgrounds (to study and subtract), muons.
  - Electronic calibration triggers (variable test pulses).
  - Source/laser/LED calibration triggers.
  - Random triggers.
Slide 4: Electronics Requirements
- Provide HV to the PMTs.
- Provide LV for the electronics.
- Provide the ability to control and monitor the detector and electronics.
  - Temperatures
  - LV, HV
  - PLD firmware
Slide 5: Electronics Requirements
- The readout should not degrade the intrinsic resolution.
  - Energy resolution: e.g. 7.5%/√E(MeV) ⊕ 2%, i.e. 5.7% at 2 MeV.
    - This is the KamLAND resolution; perhaps we can do a bit better.
  - Timing: 1.0 ns (PMT jitter) for the Hamamatsu R5912 8" PMT.
- The energy readout must cover the full dynamic range.
  - Low end: single photoelectron for individual PMTs; assume 15 counts/pe.
  - High end: a muon along the diameter of the detector; E_max of 10^? pe for the closest PMTs (exponent to be determined).
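As a quick check of the quoted figure, the stochastic and constant terms combine in quadrature (a minimal Python sketch; the 7.5% and 2% terms are taken from the slide):

```python
import math

def energy_resolution(e_mev, stochastic=0.075, constant=0.02):
    """Fractional energy resolution: stochastic/sqrt(E) combined in quadrature with a constant term."""
    return math.hypot(stochastic / math.sqrt(e_mev), constant)

print(f"{energy_resolution(2.0):.1%}")  # ~5.7% at 2 MeV, matching the slide
```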
Slide 6: Attractive and Feasible Features
- Trigger separately on the positron energy and the neutron energy.
  - Link them by recording the time since the last trigger.
  - Gives a better handle on background effects.
- The trigger should be able to impose a loose time coincidence in case the singles rate is too large.
  - In this case, prescale the single-energy triggers.
- Sample signals in a time window around the trigger event (±2.5 μs).
  - Earlier times provide input on possible backgrounds.
  - Later times provide a link to neutron triggers within the same event (a cross-check of the event timing).
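A minimal sketch of how the prompt/delayed linkage and the singles prescale might look in software; the TriggerRecord fields, the coincidence window, and the prescale factor are illustrative assumptions, not values from the slide:

```python
from dataclasses import dataclass

# Illustrative parameters -- not specified on the slide.
COINCIDENCE_WINDOW_US = 200.0   # loose prompt/delayed coincidence window
SINGLES_PRESCALE = 100          # keep 1 in N unpaired singles

@dataclass
class TriggerRecord:
    timestamp_us: float
    n_pe: int
    time_since_last_us: float   # recorded with each trigger, per the slide

def classify(records):
    """Tag each trigger as a delayed-coincidence candidate or a prescaled single."""
    kept, n_singles = [], 0
    for rec in records:
        if rec.time_since_last_us <= COINCIDENCE_WINDOW_US:
            kept.append(("coincidence", rec))           # prompt + delayed candidate pair
        else:
            n_singles += 1
            if n_singles % SINGLES_PRESCALE == 0:
                kept.append(("prescaled single", rec))  # keep a sample of singles for background studies
    return kept
```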
Slide 7: Attractive and Feasible Features
- Do zero suppression and data filtering off-detector.
  - Take advantage of modern high-speed data links.
  - More flexibility in a PC than in front-end hardware.
- Provide an independent electronic calibration for each channel over its full dynamic range (charge injection).
  - Allows injection of simulated events.
  - Allows easy tests for cross-talk.
  - Allows precise electronic calibration of each channel, in counts/pC (sources give pe/MeV and pC/pe).
  - Allows measurement of the pulse shape by varying the timing of the injected signal for successive events.
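A sketch of that last pulse-shape idea: stepping the injection delay on successive events scans the fixed 25 ns sampling grid across the pulse (equivalent-time sampling). The inject_and_read callable is a hypothetical placeholder for the real calibration/DAQ interface:

```python
SAMPLE_NS = 25.0  # ADC sampling period from the slides

def map_pulse_shape(inject_and_read, n_steps=25):
    """Build a delay -> peak-sample map by sweeping the charge-injection delay."""
    shape = {}
    for step in range(n_steps):
        delay_ns = step * SAMPLE_NS / n_steps        # sub-sample delay sweep across one clock period
        samples = inject_and_read(delay_ns)          # one injected event read out at this delay
        shape[delay_ns] = max(samples)               # record the sample nearest the pulse peak
    return shape
```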
Slide 8: Front-end
- Convert the PMT signals to a "standard" analog shape.
  - The amplitude reflects the charge from the PMT.
  - Use a low-noise passive shaper; fully linear.
  - Adapts the PMT to the speed of commercial sampling ADCs, e.g. 12-bit, 40 megasamples/sec (one sample every 25 ns).
    - TI's ADS5130 (12 bits, 50 MSPS) or AD's AD9042 (12 bits, 40 MSPS).
Slide 9: Front-end
- Sample the signal every 25 ns.
- Synchronize all ADCs using an optical timing system.
  - Laser-driven optical fiber modulated by the clock; control and trigger can be distributed with the same system.
  - LHC Timing, Trigger and Control (TTC) system.
    - Time resolution of the clock at the PMT ~100 ps.
    - Each clock pulse is "numbered".
    - The system is off the shelf, with many utility modules and custom chips.
  - Provides the ability to set the clock timing independently at each PMT.
    - Compensates for channel-to-channel delays.
    - Synchronize all PMTs using an LED at the center of the detector (transit-time variations between PMTs come from variations in HV).
- Save 200 samples in a pipeline (±2.5 μs).
  - Read out if there is a trigger; overwrite if there is no trigger.
  - Use a larger dual-port memory for dead-timeless operation.
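A software sketch of the pipeline behaviour described above (overwrite until a trigger, then snapshot the window); the 200-sample depth is from the slide, while the interface is illustrative:

```python
from collections import deque

SAMPLES_PER_WINDOW = 200           # 200 samples x 25 ns = 5 us around the trigger

class SamplePipeline:
    """Circular pipeline: samples are overwritten until a trigger freezes the window."""
    def __init__(self, depth=SAMPLES_PER_WINDOW):
        self.buf = deque(maxlen=depth)

    def clock_in(self, adc_counts):
        self.buf.append(adc_counts)    # oldest sample is silently overwritten

    def read_on_trigger(self):
        # In hardware this copy would go to a dual-port memory so sampling
        # continues during readout (dead-timeless); here we just snapshot.
        return list(self.buf)
```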
Slide 10: Front-end
- Digital pipeline systems have been built for the LHC experiments (fewer samples, but a similar delay to the trigger).
- To obtain the needed dynamic range, use multiple gains.
  - High gain gives 15 counts/pe and 4095 counts (270 pe) full scale.
  - Low gain gives 128 counts (0.8%) at the high-gain maximum and 4095 counts (8,700 pe or 14,000 pC) full scale.
  - The PMT has 5% non-linearity at 1,200 pC, so the readout will cover the useful range of the PMT output.
- Dynamic range of the readout: 17 bits, or 102 dB.
  - These figures are reduced a little by pedestal offsets (useful full scale reduced by ~30 counts).
  - Noise should be held to ~1 count to preserve the dynamic range (otherwise ADC resolution is wasted).
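The dynamic-range bookkeeping can be checked in a few lines; all inputs are from the slide, with the gain ratio of ~32 inferred from 4095/128:

```python
import math

FULL_SCALE = 4095              # 12-bit ADC
HIGH_GAIN_COUNTS_PER_PE = 15
GAIN_RATIO = FULL_SCALE / 128  # low gain reads 128 counts at the high-gain maximum -> ~32

high_gain_max_pe = FULL_SCALE / HIGH_GAIN_COUNTS_PER_PE      # ~270 pe
low_gain_max_pe = high_gain_max_pe * GAIN_RATIO              # ~8,700 pe
dynamic_range = low_gain_max_pe * HIGH_GAIN_COUNTS_PER_PE    # one count at high gain up to low-gain full scale

print(f"{high_gain_max_pe:.0f} pe, {low_gain_max_pe:.0f} pe")
print(f"{math.log2(dynamic_range):.1f} bits, {20 * math.log10(dynamic_range):.0f} dB")  # ~17 bits, ~102 dB
```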
Slide 11: Front-end
- The standard pulse shape is fitted to extract the amplitude and the time offset with respect to the sampling clock.
  - Demonstrated time resolution of the electronics is < 100 ps.
- Feature extraction is done off-detector.
  - Can be done in a PC.
  - Digital signal processing modules are used at the LHC because of the rate (100 kHz Level-1 trigger rate).
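A sketch of the off-detector feature extraction: fit a template to the 25 ns samples for amplitude and time offset. The t·exp(-t/τ) template and its 60 ns time constant are illustrative stand-ins for the real shaper response:

```python
import numpy as np
from scipy.optimize import curve_fit

SAMPLE_NS = 25.0  # one ADC sample every 25 ns

def template(t_ns, amplitude, t0_ns, tau_ns=60.0):
    """Illustrative 'standard' pulse shape (not the real shaper response),
    normalized so the peak equals 'amplitude' and starts at t0_ns."""
    t = np.clip(t_ns - t0_ns, 0.0, None)
    return amplitude * (t / tau_ns) * np.exp(1.0 - t / tau_ns)

def fit_pulse(samples, pedestal=0.0):
    """Fit amplitude and time offset of a sampled pulse relative to the clock."""
    t = np.arange(len(samples)) * SAMPLE_NS
    y = np.asarray(samples, dtype=float) - pedestal
    p0 = [y.max(), t[np.argmax(y)] - 60.0]             # crude starting values
    (amplitude, t0), _ = curve_fit(template, t, y, p0=p0)
    return amplitude, t0

# Example: generate a fake pulse with noise and recover its parameters.
t = np.arange(12) * SAMPLE_NS
fake = template(t, amplitude=500.0, t0_ns=80.0) + np.random.normal(0, 1.0, t.size)
print(fit_pulse(fake))   # ~ (500, 80)
```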
Slide 12: Data Collection
- Attractive to use a single optical or electrical data link to the control room, to facilitate connection/disconnection.
  - 500 Mbps is quite feasible with off-the-shelf components (we are doing this ourselves in ATLAS).
- Data per PMT channel per event: 2 x 12 bits x 200 samples + 15% overhead (CRC, parity, identifiers) = 5.5 kbits.
- Data per detector per trigger: 819 PMTs x 5.5 kbits = 4.5 Mbits.
- One data link could handle ~100 events/sec.
  - A rate that high is only needed for calibration; the width of the time window could be reduced when running calibrations.
  - A derandomizing buffer is needed at the link input for normal data.
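The bandwidth arithmetic, using the figures on the slide:

```python
BITS_PER_SAMPLE = 2 * 12        # two gain ranges, 12 bits each
SAMPLES_PER_EVENT = 200
OVERHEAD = 1.15                 # CRC, parity, identifiers
N_PMTS = 819
LINK_BPS = 500e6                # 500 Mbps link

bits_per_channel = BITS_PER_SAMPLE * SAMPLES_PER_EVENT * OVERHEAD   # ~5.5 kbits
bits_per_event = bits_per_channel * N_PMTS                           # ~4.5 Mbits
print(f"{bits_per_channel/1e3:.1f} kbit/channel, {bits_per_event/1e6:.1f} Mbit/event")
print(f"link capacity ~{LINK_BPS / bits_per_event:.0f} events/sec")  # ~110 ev/s
```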
Slide 13: Detector Control System
- Attractive to have a single control bus from the counting room to each detector.
  - Facilitates connection/disconnection when moving a detector.
- Functions:
  - Set the HV on the PMTs.
  - Control the electronics calibration system.
  - Monitor LV, HV, and temperatures.
- Allows bi-directional communication (the TTC system is uni-directional).
- Requirements: must have high fan-out capability (~800 PMT channels served by one bus).
Slide 14: Detector Control System
- Many commercial field-bus systems exist, e.g. CANbus.
  - But fan-out capability could be a problem with this one.
Slide 15: HV System
- Single HV cable per detector.
- Set the HV of individual PMTs on the detector.
  - HVs adjusted so all tubes have the same gain, using a laser ball or LED at the center of the detector.
  - Needn't have fine control of the HV over the full range, just over the range of PMT gain variation.
- For a constant term in the energy resolution of 2% and 820 PMTs, we need individual PMT gains settable to 2%/√820 ≈ 0.07%.
  - For a PMT, G ∝ V^n, where n is ~ the number of stages.
  - For a 10-stage tube, we need ΔV controlled to ~0.10 V out of 1500 V.
- Must be able to switch off individual PMTs: a non-trivial requirement.
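A numerical check of the HV-setting requirement, using G ∝ V^n with n = 10 stages as on the slide:

```python
import math

N_PMTS = 820
CONSTANT_TERM = 0.02        # 2% constant term in the energy resolution
N_STAGES = 10               # dynode stages; gain G ~ V**n
HV_VOLTS = 1500.0

gain_precision = CONSTANT_TERM / math.sqrt(N_PMTS)      # required per-PMT gain precision, ~0.07%
dv = HV_VOLTS * gain_precision / N_STAGES               # dG/G = n * dV/V  ->  dV = V * (dG/G) / n
print(f"gain settable to {gain_precision:.2%}, HV step ~{dv:.2f} V out of {HV_VOLTS:.0f} V")
```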
Slide 16: Trigger System
- Base the trigger on the number of photoelectrons seen in the detector.
- Useful to have sums from segmented regions of the detector.
  - Protects against localized hot spots.
  - Tile the detector into hexagonal patches and generate a pe sum from each patch (26 patches of 32 PMTs = 832).
- How to do this?
  - The traditional option is to add analog trigger signals, but:
    - Beware of analog offsets and common-mode noise.
    - The trigger signals would have separate timing and gain from the DAQ branch, and so would need their own calibration.
  - Better to use the digital information from the ADCs (see the sketch after this list):
    - The trigger signals are then time-aligned with the clock system.
    - Need to use only coarse information from the ADC (high-order bits).
    - Add the 32 digital signals of a patch to get the local sum.
    - The trigger signal is assembled every 25 ns (with some latency); may want a running sum over several clock periods.
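A sketch of that digital patch sum; the coarse-bit shift and the running-sum depth are illustrative choices, not values from the slide:

```python
from collections import deque

PATCH_SIZE = 32            # PMTs per hexagonal patch (from the slide)
COARSE_SHIFT = 4           # keep only the high-order ADC bits for the trigger sum (illustrative)
RUNNING_DEPTH = 3          # number of 25 ns clock periods in the running sum (illustrative)

class PatchTrigger:
    """Digital patch sum, updated every 25 ns clock tick, with a running sum."""
    def __init__(self):
        self.history = deque(maxlen=RUNNING_DEPTH)

    def tick(self, adc_counts):
        """adc_counts: the 32 ADC values of this patch for the current 25 ns sample."""
        assert len(adc_counts) == PATCH_SIZE
        coarse_sum = sum(c >> COARSE_SHIFT for c in adc_counts)  # coarse (high-bit) local sum
        self.history.append(coarse_sum)
        return sum(self.history)                                  # running sum over recent clock periods
```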
Slide 17: Cost Estimate
Cost per channel:
  Analog front-end               $100  (ATLAS $88)
  Digitizer and pipeline         $130  (ATLAS $94)
  Clock timing and distribution   $50
  Trigger                         $80
  HV power and distribution       $60  (ATLAS $46)
  Control system                  $40
  Misc. on-detector plumbing      $40  (ATLAS $35)
  LV power                        $20
  ----------------------------------
  Total                          $520
- 3 detectors @ 812 channels each: $1.27M
- Add 30% for EDIA and 40% contingency: $2.31M
- Overall construction cost of the experiment ~ $60M (readout is ~3.8%)
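The roll-up can be reproduced directly from the per-channel figures:

```python
cost_per_channel = {
    "analog front-end": 100, "digitizer and pipeline": 130,
    "clock timing and distribution": 50, "trigger": 80,
    "HV power and distribution": 60, "control system": 40,
    "misc. on-detector plumbing": 40, "LV power": 20,
}
per_channel = sum(cost_per_channel.values())    # $520
base = per_channel * 812 * 3                    # 3 detectors x 812 channels, ~$1.27M
total = base * 1.30 * 1.40                      # +30% EDIA, +40% contingency, ~$2.31M
print(per_channel, round(base / 1e6, 2), round(total / 1e6, 2), f"{total / 60e6:.1%} of $60M")
```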
Slide 18: Conclusions
- These comments offer a possible architecture, as a point of comparison for other options.
- The cost estimates are rough but include experience from the LHC designs (a top-down estimate is OK at this point).
- Much work is needed for detailed planning, design, and implementation.
  - Could add the readout to the Monte Carlo to evaluate the granularity and the trigger.
- It is important to agree on the architecture before starting the design.
  - Divide the detailed planning, design, and implementation among the interested groups.
Slide 19: Conclusions
- There will be PLENTY to do.