C. F. Bedoya, April 9th, 2013


C. F. Bedoya

C. F. Bedoya, April 9th, 2013

LS1 projects:
* Theta TRB (talk from Fabio)
* SC relocation (next slides)

Phase 1 Trigger:
* L1 TDR finalized and now under review. The goal is a unified trigger system in HW that allows combining information from different detectors at the earliest possible time.
* TwinMux board (pre-DTTF board): A. Triossi started tests of reading trigger inputs with Avago+FPGA.
* Algorithms: DT + RPC algorithms? Manpower more or less identified; some general approaches established. We need to start performing the actual work.

Phase 2:
* Studies mixing Tracker + DT (or other subdetector) information moved to the Tracking Trigger working group (CMS general, not subdetector based).
* New inputs from CMS regarding L1A parameters affect our electronics (this talk and the next one).

[Slide diagram: readout and trigger paths (ROS, TSC), copper and optical links]

Many important milestones achieved, but also a lot of work ahead of us. Since the last report (Oct 2012):
- New CUOF produced (attenuation pad that allowed larger margins for unbalanced data)
- December tests at P5: 3 sectors in TRG+RO operated satisfactorily with the CUOF+OFCU chain
- ESR (Electronics System Review) on Feb 6th: went well
- February tests at P5: 72 TRG links tested satisfactorily with different trigger rates
- Motherboard first production: boards in hand, firmware being finalized
- First fanouts received: good results
- Final order for fibers placed: all RO fibers delivered
- Aachen crate prototype received and tested; new backplane prototype expected soon
- Slow control OCB and ECB first tests at 904 OK
- CUOF+OFCU-TRG temperature tests: satisfactory results
- Half of the Wiener crates ordered (delivery next week?)
- Final VME_patch and TIMBUS-OFCU boards under assembly: tests OK
- Alignment modules moved from X2 Near
- DT-DSS relocation in USC underway
- Final order of Pattern Units partly placed

Next steps:
- Final motherboard tests (missing firmware)
- CUOF cooling test (missing motherboard)
- CUOF slow control tests at 904 (idem)
- Final integration tests at 904 with Minicrate to ROS and TSC chain
- New OCB to be produced (waiting for slow control tests)
- New backplane from Aachen expected soon
- OFCU-TRG slow control tests
- Final purchase of OF fanouts
- Fiber installation
- LV rack refurbishment (waiting for CUOF consumption)
- Place final order of Wiener crates
- Final orders of CUOF, motherboards, slow control, OFCU-RO, TSCrear, OFCU-TRG

IRR (Installation Readiness Review) 3 months prior to installation (Oct), i.e. June...

However, there is still one point I don't think we have totally solved yet: would we go ahead with the full trigger relocation as it is? This should be decided with some pre-production tests; partially done at Torino, with doubts... what else??

DT LS1 upgrade project milestones:
2012 June -- new TRBs and CuOF prototypes tested for radiation tolerance
2012 June -- DT system Electronics Change Review
2012 Nov -- Installation plan review / milestones reassessment
2013 March -- Fiber trunk cables ready for installation
2013 May -- new theta TRB ready for installation
2013 July -- CuOF-OFCu system ready for installation in one test wheel (YB-1?)
2013 Oct -- Complete CuOF-OFCu system ready for installation in five wheels

Interim report from the TPSWG (Trigger Performance Strategy Working Group):
bin/DocDB/RetrieveFile?docid=11688&version=1&filename=TPS-Interim-Report-v20.pdf
TPSWG charges: to establish a plan for CMS triggering options during Phase 2. Even if you think you will be retired by 2022, think of what you want to do until then... (R&D takes years)

TPSWG charges: to establish a plan for CMS triggering options during Phase 2.
* L1 tracking trigger
* Bandwidth increase (L1A latency 20 us and rate 1 MHz)
* More latency is desirable for implementing tracking trigger algorithms
* The rate increase of hadron (and photon) triggers will not be reduced by a tracking trigger (need more than 100 kHz)
The ROBs may be able to stand a 20 us latency, but a 1 MHz trigger rate is basically impossible. It could be possible in a fraction of them, mostly if we run the TDCs in continuous mode. What do we do? 1 MHz & 20 us => 1-2 years & 5 MCHF? Probably more... Evaluate.
Studies so far are compiled in this document; you are welcome to review it.

We are being asked this. Other intermediate points that alleviate the cost and time of the needed intervention should be given (if they exist).

C. F. Bedoya, December 11th

Higher LHC luminosity will imply larger occupancies in the DT chambers, which will turn into a higher hit frequency. The consequences of this higher occupancy are:
- increased buffer occupancy at the different levels
- slower event processing (reduced processing speed)
- increased required bandwidth for the output link
An increase of trigger rate and/or trigger latency only makes this scenario even harder.
* L1A rate and latency will impact only the readout chain
* The ROS will be redesigned between LS1 and LS2
* We will try to incorporate the DDU functionality to remove further bottlenecks
* i.e. the impact is on the ROB

Big constraint: the large time needed to perform any intervention in our detector. It is very unlikely that we get access time for all of our needs:
- LS1.5?: new pixel
- LS2: HCAL, GEMs?, GRPC?
- LS3: new Tracker, new ECAL FE
Whatever we invent will likely need to coexist with the present electronics, but at the same time the architecture could be completely different if a 20 us latency is granted...

PHASE 2: We need to come up with a general plan for DT by the CMS Upgrade week in June (DESY).
- Running until fb-1 by 2030
- Instantaneous luminosity of 7-8e34 after LS3 (2030)
- Level-1 trigger: 1 MHz rate and 20 us latency?
Will the chambers survive that? Do we want to make any special study on them? Maybe not the full chambers, but some items? PADCs? FEBs? Minicrates? In reality, probably the power supplies and the cooling are the points of major concern for the long-term reliability of the system... (though there is not much design we can do on that?...)


We can digitize every hit, assign it a time stamp (related to BC0?) and send it outside the Minicrate to USC. With the digital time word, one should be able to recreate the trigger primitives. Each time measurement will contain: tBX + tdrift + offsets...

[Slide diagram: UXC / USC] These hits will be stored in memories until an L1A arrives and event matching is performed (such as what the ROB does).

Minicrate mechanics, integration and interfaces:
- What is easier: replacing the full aluminum structure or only the inner parts?
- How long does the replacement take?
- How can we install new fibers from the MC to the racks?
- Power dissipation

Electronics design, time digitization:
- R&D on FPGA-based time digitization (ACTEL not feasible, 350 MHz max; Xilinx?)
- New techniques for scrubbing and other radiation-tolerance actions (reloading firmware with TTC commands, etc.)
- Delay lines or deserializer (DESER) approach?
- Power consumption
- How many channels per FPGA? (cost)

Trigger algorithms and off-site electronics:
- Starting from the digital time information: extract the correct bunch crossing, obtain trigger primitives (meantimer? others?)
- Implement algorithms in FPGAs and interconnect with the present trigger electronics

Readout:
- Perform event matching, etc. What latency is required?
- Can we survive with a mixed scheme in which we replace only a fraction of the Minicrates?

With our time resolution requirement, we should focus on FPGA implementations.

* Chamber resolution: approx. 4 ns
* HPTDC working in ns time bin (actually, signal integrity degrades this resolution to approx. 1 ns in some channels)
* FPGA: there are basically two approaches to reach this resolution:
  - Deserializer option (+ coarse counter)
  - Delay lines (+ coarse counter)
* Current consumption should also be a factor in the design; we could easily be talking about a few amps/board (a critical point in my opinion)
* Radiation tolerance is the other constraint

Delay-line option: using FPGA gates as delay elements. High resolution can be achieved (picoseconds), but there are dependencies on placement (and thus firmware version), on temperature, on power supply...

* Deserializer option: encoding the hit as an input serial stream. It can be done with high-speed deserializers (GTX) or with lower-speed ones (which depend on the clock speed).
[Block diagram: hit -> deserializer (high-speed clock) -> fine time; coarse counter (coarse clock) -> coarse time; encoder -> digital time word]
* GTX are for 11 Gbps and more.
* The number of GTX per FPGA is somewhat limited, but with SERDES one can think of achieving a large number of channels per chip (128? one ROB?).
* 1 ns => 1 GHz clock speed (this is well within today's reach).
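To build intuition for the deserializer option, here is a minimal software sketch (hypothetical, for illustration only; the real thing would live in FPGA fabric): the hit line is sampled into parallel words at the serial bit rate, the fine time is the bit position of the first sampled '1', and the frame index acts as the coarse counter.

```python
def deser_timestamp(frames, bits_per_frame=8, bit_period_ns=1.0):
    """Toy model of a deserializer-based TDC.

    `frames` is the sequence of parallel words produced by the
    deserializer, one per coarse-clock cycle, earliest bit first.
    The fine time is the position of the first sampled '1' (the
    hit edge); the frame index acts as the coarse counter.
    Returns the hit time in ns, or None if no edge was seen.
    """
    for coarse, frame in enumerate(frames):
        for fine, bit in enumerate(frame):
            if bit:
                return (coarse * bits_per_frame + fine) * bit_period_ns
    return None

# With a 1 GHz sampling clock (1 ns bit period), a hit edge in
# bit 3 of the second 8-bit frame is stamped at (8 + 3) * 1 ns:
print(deser_timestamp([[0] * 8, [0, 0, 0, 1, 1, 1, 1, 1]]))  # -> 11.0
```

The 1 ns bin quoted on the slide corresponds to the 1 GHz sampling assumed here; a faster serial clock (or GTX) would shrink `bit_period_ns` accordingly.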

Baseline approach: 128 channels/FPGA (one FPGA == one ROB) => 6-7 FPGAs/Minicrate => 2-3 boards max/Minicrate? => probably one or two MTP optical fiber cords per Minicrate (24 links).
At 10 times more luminosity, the required output bandwidth could be 200 Mbps/ROB (30 kHz uncorrelated hits, 114 kHz tracks/ROB). An 11.6 Gbps output link allows transmitting basically 9 time measurements/BX => no latency impact.
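The "9 time measurements/BX" claim follows from simple arithmetic on the numbers quoted above (a back-of-the-envelope sketch; the 11.6 Gbps link speed and 32-bit word size are taken from these slides):

```python
BX_RATE_HZ = 40e6   # LHC bunch-crossing frequency (25 ns spacing)
LINK_BPS = 11.6e9   # output link bandwidth from the slide
WORD_BITS = 32      # one time measurement per 32-bit word

bits_per_bx = LINK_BPS / BX_RATE_HZ           # 290 bits every 25 ns
words_per_bx = int(bits_per_bx // WORD_BITS)  # whole words per BX
print(words_per_bx)  # -> 9
```

So the link can ship every hit out within the same bunch crossing it was stamped in, which is why no latency impact is expected.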

[Slide photo: the size of these connectors...]

How do we obtain the trigger primitives from BC0-related digital time measurements? To be continued...
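One classical candidate is the meantimer mentioned on the R&D slide. The sketch below is an illustrative toy, not the actual firmware: for three consecutive staggered DT layers crossed by a straight track, the drift times satisfy (d1 + d3)/2 + d2 = Tmax, so with BC0-referred absolute times ti = t0 + di the crossing time t0 (and hence the bunch crossing) can be solved for without knowing the drift times individually.

```python
BX_NS = 25.0  # LHC bunch spacing

def t0_from_meantimer(t1, t2, t3, tmax_ns):
    """Recover the track crossing time t0 from three BC0-referred
    hit times in consecutive staggered layers: since
    (t1 + t3)/2 + t2 = 2*t0 + Tmax for a straight track,
    t0 = ((t1 + t3)/2 + t2 - Tmax) / 2."""
    return ((t1 + t3) / 2.0 + t2 - tmax_ns) / 2.0

def bunch_crossing(t0_ns):
    """Assign the reconstructed crossing time to a bunch crossing."""
    return round(t0_ns / BX_NS)

# Toy example: crossing at t0 = 100 ns, Tmax = 380 ns, drift times
# d1 = 50, d2 = 280, d3 = 150 (so (50 + 150)/2 + 280 = 380):
t0 = t0_from_meantimer(150.0, 380.0, 250.0, 380.0)
print(t0, bunch_crossing(t0))  # -> 100.0 4
```

In practice one would run this over all layer triplets and hit patterns and keep the combinations whose reconstructed t0 clusters at a common bunch crossing.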


TSWG (Trigger Strategy Working Group): CMS is discussing the possibility of operating in Phase 2 with:
- 1 MHz L1A rate (instead of 100 kHz)
- 20 us latency (instead of 6 us as previously assumed; now 3.2 us)
We are asked to provide statements about the implications for our subsystem. Two meetings have already taken place. It appeared unlikely because it meant replacing all the ECAL electronics, but now this is not considered so difficult. (Would this mean that a tracking trigger is not so mandatory if we can increase the trigger rate to 1 MHz? Not clear.)


Rate outside the trains, no muons included (basically neutrons). Big differences between chambers:
- YB-2 S4 MB4: 27 kHz/TDC channel
- YB-/+2 MB1s: 12 kHz/TDC channel
Accounting for hits inside the trains may add 3.6 kHz in MB1 and 0.3 kHz in MB4 (?)
From Gianni, CMS week June

From Gianni, CMS week June:
* Upper sectors MB4
* Leaks between barrel and endcap in the MB1s

From I. Redondo, YB-/+2 MB1s:
- 350 kHz "muon" rate per chamber
- 114 kHz "muon" rate per ROB
- 29 kHz "muon" rate per TDC
These are chamber rates (DTTF input counters), phi rates.

[Plot: eta rates, from I. Redondo]

Hit rate 27 kHz => 37 us between hits. Since there are:
- 128 channels/ROB => one hit per ROB every 0.29 us (3.5 hits/ROB/event)
- 32 channels/TDC => one hit per TDC every 1.2 us
Including muons adds 8 hits per L1 buffer FIFO (or more): worst muon rate 350 kHz => 29 kHz/TDC => 1 muon every 35 us per TDC.
Hits are stored in the L1 buffer until the L1A arrives. So if the L1A latency were 20 us, we would have to store at most 1 hit/channel, i.e. 32 hits/TDC, until the L1A arrives, plus 8 hits/TDC from muons.
This represents 12 hits per L1 buffer FIFO per event, while there are 256 positions in the FIFO. It seems it should be possible to work with 20 us latency in Phase 2.
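The intervals above follow directly from the quoted rates; a quick sanity check (a sketch using the slide's numbers; the last line gives the average occupancy, while the slide's max-1-hit-per-channel argument gives the 32-hit bound):

```python
HIT_RATE_HZ = 27e3  # background hit rate per TDC channel (worst chamber)

us = lambda seconds: seconds * 1e6  # convert to microseconds

# Mean time between hits: per channel, per ROB (128 ch), per TDC (32 ch)
print(us(1 / HIT_RATE_HZ))          # ~37 us between hits on one channel
print(us(1 / (HIT_RATE_HZ * 128)))  # ~0.29 us between hits on one ROB
print(us(1 / (HIT_RATE_HZ * 32)))   # ~1.2 us between hits on one TDC

# Average hits buffered in one TDC during a 20 us L1A latency
print(HIT_RATE_HZ * 32 * 20e-6)     # ~17 hits, vs. 256 FIFO positions
```

Even with the muon contribution added on top, the L1 buffer depth of 256 leaves a comfortable margin, which is what supports the 20 us conclusion.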

Readout FIFO:
- Event processing speed
- Output link bandwidth
The time window is approximately 1 us, and we will have 1 hit/TDC every 1.2 us; that is the number of hits per event expected to be stored in the readout FIFO.
But the limiting factor is the speed at which you output the data: byte-wise mode at 20 MHz; header, trailer and hits are 32-bit words. It takes 8 BXs to send each hit (minimum), + 16 BXs for header + trailer, + 8 BXs for token sharing, + it seems there is more... (next slide)
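The "8 BXs per hit" figure follows from the byte-wise bus: a 32-bit word at 8 bits per 20 MHz clock is four 50 ns clocks, i.e. 200 ns = 8 bunch crossings. A sketch of the minimum event-readout time using only the overheads quoted above (the real overhead is larger, as the next slide notes):

```python
BX_NS = 25.0          # bunch-crossing period
CLK_NS = 1e3 / 20.0   # 20 MHz byte-wise readout clock -> 50 ns

def bx_per_word(word_bits=32, bus_bits=8):
    """BXs needed to push one 32-bit word over the 8-bit bus."""
    return (word_bits // bus_bits) * CLK_NS / BX_NS

def event_readout_bx(n_hits, header_trailer_bx=16, token_bx=8):
    """Minimum BXs to read out one event with the quoted overheads."""
    return n_hits * bx_per_word() + header_trailer_bx + token_bx

print(bx_per_word())          # -> 8.0 BX per hit
print(event_readout_bx(3.5))  # 3.5 hits/ROB/event -> 52.0 BX (= 1.3 us)
```

This lower bound (~1.3 us per event, i.e. under ~770 kHz) is consistent with the measured maximum of ~500 kHz on the next slide being limited by the additional overheads.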

Test performed on a ROB in the lab with a fixed trigger rate: with no muons (3.5 hits/ROB), max 500 kHz... probably optimistic.
If one includes the worst-case muon rate (350 kHz/chamber => 1 muon/ROB every 9 us) => max 300 kHz (one could reduce the time window, increase the threshold, or start being inefficient...)

Trigger matching reduces the payload (at 100 kHz L1A rate => one L1A every 10 us with a sampling window of 1 us => we read 10% of the time). At 1 MHz L1A rate we would be reading 100% of the data, so trigger matching becomes an overhead of processing time and bandwidth (headers and trailers).
In principle, it is possible to run the HPTDCs in continuous mode:
- hits are sent as they arrive from the chambers,
- time measurements are referred to BC0.
Latency and L1A rate then become irrelevant for the ROB. Trigger matching in the new ROS may be possible (needs study, and will imply detailed calibration of the different absolute BX IDs between the ROBs and the ROS). It may open the possibility of using TDC data in the trigger (?)... (output delay not deterministic and long...)
The limiting factor in this mode of operation is the bandwidth of the readout link: 27 kHz * 128 ch * 32 bits/hit = 110 Mbps. With muons (350 kHz/chamber => 114 kHz/ROB of 8-hit muons) => 114 Mbps. Max bandwidth is 160 Mbps => hit rate 39 Hz/cm2 (to be checked). The MB4s are close... This may work depending on the uncertainty of the background; if it is large, then we start to be inefficient.
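The continuous-mode link budget above can be re-derived (a sketch using the slide's numbers; the muon contribution assumes 8 hits per 114 kHz track, and the summed figure this toy produces is ~140 Mbps, still below the 160 Mbps limit):

```python
BITS_PER_HIT = 32
CHANNELS_PER_ROB = 128

def link_load_mbps(bkg_hz_per_ch, muon_tracks_hz_rob=0.0, hits_per_muon=8):
    """Readout-link load in Mbps for HPTDC continuous mode:
    every hit is shipped as one 32-bit word, no trigger matching."""
    bkg = bkg_hz_per_ch * CHANNELS_PER_ROB * BITS_PER_HIT
    muons = muon_tracks_hz_rob * hits_per_muon * BITS_PER_HIT
    return (bkg + muons) / 1e6

print(link_load_mbps(27e3))          # background only: ~110.6 Mbps
print(link_load_mbps(27e3, 114e3))   # with worst-case muons: ~139.8 Mbps
```

Since the load scales linearly with the background rate, the 160 Mbps ceiling is what translates into the ~39 Hz/cm2 hit-rate limit quoted above.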

We don't expect any problem running the present ROB design with Phase 2 occupancies, 100 kHz L1A rate and 6 us L1A latency. With the background rates expected in Phase 2, it is unlikely that we could run at a trigger rate higher than 500 kHz. Latencies of 20 us could likely be achieved, although proper testing should be done.
The HPTDCs could be operated in continuous mode, which should allow operation with 20 us latency and 1 MHz trigger rate. However, this means no background reduction is possible by playing with the time window (no filtering is done, all hits are sent). Therefore we could start to lose efficiency for background rates larger than 39 Hz/cm2 (this number should also be checked). MB4 rates are too close to this number (27 Hz/cm2 for YB-2 S4 MB4). Any action to place shielding on the MB4s of the upper sectors or on the MB1s of the external wheels will reduce any possible efficiency drop.
Considering the large uncertainties (energy, extrapolation, target luminosity, etc.), even if the MB4s can be shielded, replacing the ROBs of the MB1s in the external wheels with a higher-performance board should also not be discarded.
In order to operate in continuous mode, DC balancing is mandatory at the CUOF level.


The width of the board is fixed by the Minicrate width (approx. 9.5 cm). The length is variable; the limitation is the input connectors, which are 5.5 cm long, and hence the number of boards. Otherwise, an interface board with high-density connectors could be used. This would allow higher-density integration (up to 1 board/Minicrate? do we want this?)

Simulations of the impact of detector inefficiency on the physics analyses do not exist. A muon joint effort would make sense here. Probably even less exists for simulation of the electronics (readout), and this is one of the main points we are asked about...
In reality, probably the power supplies and the cooling are the points of major concern for the long-term reliability of the system... (though there is not much design we can do on that?...)
But for custom electronics we need time for brainstorming and for putting together the manpower and the budget (different timescales for different institutes, etc.), so discussions on what can be done should take place now. Different L1A parameters may imply some challenge in the electronics... it may be more interesting than rebuilding the same wheel... Or maybe what we want to think about is replacing subdetector technologies... but I doubt that.

(or 25 us...) maybe not so much. Pixel trigger?

RPC: probably changed by LS2, all in USC. Any increase in L1A parameters => keuros (84 RMBs, 3 FEDs).
CSC: good at increased L1A rate, bad at increased latency. >300 kHz & 6.4 us => 4 months & 12 MCHF. Is 500 kHz possible at all? With which implications? Is it worth changing just a fraction of the CFEBs? Do you expect any longevity issue that would make a redesign worthwhile independently of this? 1 MHz & 20 us => 4 months & 12 MCHF? Is that true?
DT: opposite to the CSCs... latency should be OK, the rate is the killer. 300 kHz & 20 us => OK (to check). 500 kHz & 20 us => YB-/+2 MB1s and MB4s? 6 months? cost? 1 MCHF? Evaluate. The possibility is open that we need to replace the Minicrates either way... maybe not all in one go, but the R&D should be launched soon. 1 MHz & 20 us => 1-2 years & 5 MCHF? Probably more... Evaluate.
ECAL: the limit is 150 kHz & 6.4 us (maybe 300 kHz?). Beyond that => 26-month shutdown, 10 MCHF.
Tracker: very much in favor of increasing the latency (20 us) if the rate is increased; happy with 500 kHz; 1 MHz not impossible.
HCAL: changes limited to USC; does not seem a major constraint at present.

