Introduction: HCAL Front-End Readout and Trigger
Joint effort between:
- University of Maryland – Drew Baden (Level 3 Manager)
- Boston University – Eric Hazen, Larry Sulak
- University of Illinois, Chicago – Mark Adams, Rob Martin, Nikos Varelas
Web site: http://macdrew.physics.umd.edu/cms/
- Has the latest project file, spreadsheets, conceptual design report, etc.
This talk: design, schedule, issues.

CMS Inner Detector

HCAL Barrel Trigger Primitives
Δη × Δφ = 0.087 × 0.087, H0 and H1 depths
Trigger primitive:
- H0 + H1 added together
- Trigger tower (TT) ↔ physical tower
- Convert to ET via LUT
- Muon bit added: 1.5 GeV < E < 2.5 GeV (1 MIP ~ 2 GeV)
Level 1 data, 24 bits (see the packing sketch below):
- Tower A, 9 bits: 8 bits nonlinear ET + 1 muon bit
- Tower B, 9 bits: same as A
- 1 abort-gap bit
- 5 bits Hamming code
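A minimal packing sketch of this 24-bit word in Python; the field ordering and the Hamming-code value are illustrative assumptions, not the actual link format.

    def pack_tower(et_8bit, muon_bit):
        """Pack one trigger tower: 8-bit nonlinear ET plus 1 muon feature bit (9 bits)."""
        assert 0 <= et_8bit < 256
        return (et_8bit << 1) | (muon_bit & 1)

    def pack_tp_word(tower_a, tower_b, abort_gap, hamming5):
        """Assemble the 24-bit Level 1 word: 9 bits per tower, 1 abort-gap bit, 5 Hamming bits.
        The bit ordering chosen here is for illustration only."""
        word = (tower_a & 0x1FF)
        word |= (tower_b & 0x1FF) << 9
        word |= (abort_gap & 1) << 18
        word |= (hamming5 & 0x1F) << 19
        return word  # 24-bit value

    # Example: tower A carries ET code 0x7F with the muon bit set, tower B is empty
    w = pack_tp_word(pack_tower(0x7F, 1), pack_tower(0, 0), abort_gap=0, hamming5=0)
    print(f"{w:06x}")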

HCAL Endcap Trigger Primitives
Δη × Δφ = 0.087 × 5° (0.087) for η < 1.6
Δη × Δφ = 0.087 × 10° (0.175) for η > 1.6
Trigger primitive:
- η < 1.6: same as Barrel
- η > 1.6: 1 TT = 2 physical towers; break the physical tower into halves and send
Level 1 data:

HCAL Forward Trigger Primitives
Δη × Δφ = 0.166 × 10° (0.175)
Trigger primitive: combine 2 in φ and 3 in η to make a trigger tower
Level 1 data: same as Barrel

HCAL TriDAS Relevant Parameters
These parameters drive the design and project cost:
- HCAL channels and topology determine:
  - Total channel count
  - Total number of trigger towers (includes calibration fibers)
  - Need to understand the 53° overlap region, high-radiation region, fiber cabling, etc.
- QIE readout parameters determine:
  - Number of channels per card: 2 channels per fiber, 7 data bits + 2 cap bits per channel
  - HP G-links, 20-bit frames, 16 bits data
  - Total number of cards to build
- Data rates:
  - Assume 100 kHz Level 1 accepts
  - Occupancy estimated to be 15% at L = 10^34 cm^-2 s^-1
  - Determines buffer sizes
- Level 1 trigger crossing determination:
  - Under study (Eno, Kunori, et al.)
  - Determines FPGA complexity and size (gates and I/O)

  Region     Towers   Fibers   Trigger Towers
  Barrel      4,968    2,304
  Outer       2,280    1,140
  Endcap      3,672    1,836        1,728
  Forward     2,412    1,206          216
  Total      13,332    6,486        4,248
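As a rough illustration of how these parameters feed into bandwidth and buffer sizing, the sketch below turns the channel count, occupancy and L1A rate into an average DAQ data rate. The bytes-per-channel figure is an assumption for this sketch, not a number from the talk.

    # Rough average-rate estimate from the parameters above (illustrative only).
    l1a_rate_hz   = 100e3     # assumed Level 1 accept rate
    occupancy     = 0.15      # estimated occupancy at L = 1e34 cm^-2 s^-1
    n_channels    = 13_332    # total HCAL towers from the table
    bytes_per_hit = 10 * 2    # assumption: 10 time buckets x 16-bit words per read-out channel

    avg_event_size = n_channels * occupancy * bytes_per_hit      # bytes per L1A
    avg_rate_gbps  = avg_event_size * l1a_rate_hz * 8 / 1e9      # total into the DAQ

    print(f"~{avg_event_size/1e3:.0f} kB per event, ~{avg_rate_gbps:.1f} Gbit/s average")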

Readout Crate Components
HCAL Readout Crate: 9U VIPA, no P3 backplanes and aux cards (saves $)
- Receiver and pipeline cards: 18 HCAL Trigger and Readout (HTR) cards per crate
  - Raw data fiber input: 16 fibers, 2 channels per fiber = 32 channels per card
  - Level 1 trigger primitives output
  - Level 2 data output to DCC (next bullet)
- Data concentrator: 1 DCC card (Data Concentrator Card)
  - Receives and buffers L1 accepts for output to L2/DAQ
- Crate controller: 1 HRC card (HCAL Readout Control module)
  - VME CPU for slow monitoring
  - FPGA logic for fast monitoring
  - TTC LVDS fanout a la ECAL
[Block diagram: front-end QIE transmitters send 2 channels/fiber at 1 Gbps into the 18 HTR cards per VME crate; the HTRs send Level 1 trigger data to the Level 1 trigger over Vitesse copper links (20 m cables, 1 Gbps) and Level 2 raw data to the DCC over a 200 Mbps link.]
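A quick sanity check (not from the slides) of how the fiber count in the parameters table maps onto HTR cards and readout crates with the 16-fiber, 18-card figures above; the real card count quoted later in the talk (463 HTRs) also covers calibration fibers, the overlap region and spares.

    import math

    # Back-of-the-envelope crate count from the numbers on this slide.
    fibers_total   = 6_486   # from the parameters table
    fibers_per_htr = 16      # 2 channels per fiber -> 32 channels per card
    htrs_per_crate = 18

    htr_cards = math.ceil(fibers_total / fibers_per_htr)
    crates    = math.ceil(htr_cards / htrs_per_crate)
    print(htr_cards, "HTR cards minimum,", crates, "readout crates")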

HCAL Trigger and Readout (HTR) Card
All I/O on front panel:
- Level 1 trigger tower data outputs: 8 shielded twisted pairs, 4 per single 9-pin D connector
- TTC input: from TTCrx on HDC, a la ECAL
- Raw data inputs: 16 digital serial fibers from QIE, plus 1 serial twisted pair with TTC information
- DAQ data output to DCC: single connector running LVDS
FPGA logic implements:
- Level 1 path: trigger primitive preparation, transmission to Level 1
- Level 2/DAQ path: buffering for the Level 1 decision (no filtering or crossing determination necessary), transmission to DCC for Level 2/DAQ readout
[Block diagram: HP G-link fiber receivers feed the Level 1 path (trigger primitives, Vitesse output to the Level 1 trigger framework) and the Level 2 path (pipeline and Level 2 buffers, point-to-point LVDS Tx to the DCC); TTL input from TTC; VME connectors P0/P1/P2.]

HTR Card Conceptual Design
Current R&D design focuses on:
- Understanding HCAL requirements
  - Interface with FNAL group
  - No card-to-card communication; has implications for summing in the 53° region and for fiber routing
- How to implement I/O
  - R&D on all relevant links is in progress
  - Minimizing internal data movement
- FPGA algorithm implementation
  - Requires a firm understanding of requirements
  - Requires professional-level FPGA coding
- Reducing costs
  - FPGA implementation: Altera vs. Xilinx; on-chip FPGA memory is a big factor
  - Eliminating the P3 backplane saves ~$1000/card and $1500/crate
  - Maintain openness to industry advances ("watch" HDTV)

HTR Conceptual Schematic
- Input: QIE 7-bit floating point plus 2 "cap" bits
- Lookup table (LUT): convert to 16-bit linear energy
- Pipeline ("Level 1 path"): transmit to Level 1 trigger, buffer for Level 1 Accept, 3 µs latency
- Level 2 buffer ("Level 2 path"): asynchronous buffer, sized based on physics requirements
[Schematic: each channel receiver delivers 20-bit frames at 40 Mframes/s (7 data bits, 2 "cap" bits, frame bit, "cap" error bit); a PLD and LUT convert the 7-bit code to 16 bits for the Level 1 path (trigger primitives) while the raw 7/9 bits go into the Level 2 path (pipeline) toward the DCC.]
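A sketch of the LUT stage described above, assuming the full 9-bit QIE code (7 data bits plus 2 cap bits) addresses a 512-entry table, as the cost slide later implies; the nonlinear-to-linear mapping used here is a placeholder, not the real QIE transfer function.

    # Minimal sketch of the LUT stage: the 9-bit QIE code addresses a 512-entry
    # table whose 16-bit output is the linearized energy.
    def build_lut(to_linear):
        lut = []
        for addr in range(512):              # 2^9 addresses
            raw = addr & 0x7F                 # 7-bit nonlinear ADC code
            cap = (addr >> 7) & 0x3           # 2 cap-ID bits (could select per-cap corrections)
            lut.append(min(to_linear(raw, cap), 0xFFFF))  # clamp to 16 bits
        return lut

    # Placeholder pseudo-exponential response, just to make the sketch runnable
    lut = build_lut(lambda raw, cap: int(raw ** 1.8))

    def linearize(raw7, cap2):
        return lut[((cap2 & 0x3) << 7) | (raw7 & 0x7F)]

    print(linearize(0x40, 1))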

Level 1 Path
- 40 MHz pipeline maintained
- Trigger primitives:
  - 8-bit "floating point" E to 16-bit linear ET
  - Combination of HCAL towers into trigger towers
  - Muon MIP window applied, feature bit presented to Level 1 (see sketch below)
- Relevant TTC signals via TTCrx chip: synchronization and bunch crossing ID
- Level 1 filtering using a simple algorithm, guided by BCID and Monte Carlo
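The sketch below illustrates the trigger-primitive step for a single tower, using the MIP window quoted on the barrel slide; the compression scale is a placeholder (the real 8-bit compression is nonlinear and defined by the LUT).

    # Sketch of one tower's trigger primitive: compress the energy to an 8-bit ET code
    # (placeholder linear scale) and set the muon feature bit inside the MIP window.
    MIP_LO_GEV, MIP_HI_GEV = 1.5, 2.5
    ET_LSB_GEV = 0.5   # assumed LSB, for illustration only

    def tower_primitive(e_gev):
        et_code = min(int(e_gev / ET_LSB_GEV), 255)            # 8-bit ET (placeholder compression)
        muon = 1 if MIP_LO_GEV < e_gev < MIP_HI_GEV else 0     # MIP-window feature bit
        return (et_code << 1) | muon                            # 9-bit tower field

    print(bin(tower_primitive(2.0)))   # a MIP-like deposit sets the muon bit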

Level 2/DAQ Path
- 40 MHz pipeline into RAM, wait for the L1 decision
  - 18 bits wide × 5 µs buffer = 3.6k bits/channel; 32 channels per card, ~400 kbits (see next slide)
- Level 1 Accept causes time slices to be clocked into 16+2-bit derandomizing buffers
  - Assume 10 time buckets per crossing are necessary for readout
- L2 format (FPGA) consists of 16-bit words (see the sketch below):
  - Header: Event ID (1), beam crossing/"data state" (1)
  - Data: data buckets (N ≤ 10), SUM from filter (1), address/error (1)
  - 2 bits of "data state" added to the header
- Apply the HCAL proposed trigger rule: no more than 22 L1A per orbit
  - Deadtime < 0.1% given 100 kHz L1A; see "next" slide for more on this
[Block diagram: 32 channels feed the derandomizing buffers; a Level 2 "filter" (ASIC?) and the L2 format FPGA assemble channel data, channel error bits and the SUM into an "L2 FIFO" for transmission to the DCC; an L1A decoder takes Level 1 Accepts from the trigger framework and steers the address pointer over the "N" data buckets (10 buckets per crossing, shown over 12 beam crossings), with a "Full" flag back to the control logic.]
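A hedged sketch of assembling one such 16-bit-word fragment; the exact bit layout of the beam-crossing/"data state" word is an assumption.

    # Build the event fragment described above: two header words, up to 10 data
    # buckets, a SUM word from the filter, and an address/error word.
    def build_fragment(event_id, bx, data_state, buckets, sum_word, addr_err):
        assert len(buckets) <= 10
        words = [
            event_id & 0xFFFF,
            ((data_state & 0x3) << 14) | (bx & 0x3FFF),  # 2 "data state" bits packed with the BX (layout assumed)
        ]
        words += [b & 0xFFFF for b in buckets]
        words += [sum_word & 0xFFFF, addr_err & 0xFFFF]
        return words

    frag = build_fragment(event_id=42, bx=1234, data_state=1,
                          buckets=[0x010, 0x1A3, 0x07F], sum_word=0x222, addr_err=0x0)
    print([f"{w:04x}" for w in frag])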

HTR Cost and Function Analysis
HTR constitutes 60% of production M&S. Cost estimates are difficult and very fluid due to uncertainty in requirements.
Driven by the memory requirement (worked through in the sketch below):
- 32 channels, 7 bits of raw data + 2 cap bits = 9 bits per channel
- LUT produces 16 bits nonlinear, therefore needs 2^9 = 512 locations × 16 bits = 8192 bits per channel
- Pipeline: 5 µs latency (3 µs officially) requires 200 cells at 40 MHz
  - Store the raw 9 bits in the pipeline, which reduces the memory requirement: 200 × 9 = 1800 bits per channel
- Accept buffer (aka "derandomizer"): how deep?
  - Apply the L1 Accept rule: < 15 L1A's per orbit
  - 10 time buckets × 16 bits × 15 deep = 2.4k bits
- Total bits/channel: 8192 + 1800 + 2400 = 12392
  - 32 channels means we need ~400k bits
  - And that means we use the LUT in both the L1 and L2 paths (OK, it is only a 40 MHz pipeline, easy for an FPGA)
There are some "tricks", but all have engineering risk and cost, e.g.:
- Do some integer arithmetic inside the FPGA, save the 2 cap bits, and decrease the memory by ×4
- Go off the FPGA chip for memory access
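The memory budget above, written out as a short calculation:

    # Per-channel memory budget from this slide.
    lut_bits      = (2 ** 9) * 16   # 512 locations x 16 bits = 8192
    pipeline_bits = 200 * 9         # 5 us at 40 MHz, raw 9-bit samples = 1800
    derand_bits   = 10 * 16 * 15    # 10 buckets x 16 bits x 15-deep buffer = 2400

    per_channel = lut_bits + pipeline_bits + derand_bits
    per_card    = per_channel * 32
    print(per_channel, "bits/channel ->", per_card / 1e3, "kbits per 32-channel card")
    # 12392 bits/channel -> ~397 kbits, i.e. the ~400 kbit on-chip requirement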

HCAL L1A Limit Rule
- L1A average rate = 100 kHz = 1 per 10 µs
- 25 ns per crossing means on average 400 crossings between L1A's
- Orbit period = 88 µs, or approximately 3600 crossings
- Therefore, on average 9.46 ± 0.36 L1A per orbit
- We will require < 15 L1A per orbit for the HTR L2/DAQ buffers (see the estimate below)
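Assuming the L1 accepts are Poisson distributed (an assumption, not stated on the slide), a short estimate of how often an orbit would exceed the 15-L1A cap:

    from math import exp, factorial

    mean_l1a_per_orbit = 3600 / 400   # ~9, from the numbers above
    cap = 15

    # P(more than `cap` L1A in one orbit) for a Poisson process at this mean
    p_over = 1.0 - sum(exp(-mean_l1a_per_orbit) * mean_l1a_per_orbit**k / factorial(k)
                       for k in range(cap + 1))
    print(f"P(> {cap} L1A in one orbit) ~ {p_over:.1e}")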

HTR Costs
FPGA:
- Need 400k bits of on-board memory
- Need 32 channels × 9 pins input = 288 input pins
- Need some VME + LVDS + Vitesse => another 100 pins at least
- Overall FPGA requirement: 400 kb memory, 500 I/O pins
Latest quotes (as of 4/6/00, after the baseline was established):

Data Concentrator Card
Motherboard/daughterboard design:
- Build the motherboard to accommodate PCI interfaces (to PMC and PC-MIP) and a VME interface
- PC-MIP cards for data input: 3 LVDS inputs per card, 6 cards per DCC (= 18 inputs)
  - Engineering R&D courtesy of DØ
- 2 PMC cards for buffering: transmission to L2/DAQ via link Tx
[Block diagram: data from the 18 HTR cards enters via 6 PC-MIP mezzanine cards (3 LVDS Rx per card); a PMC logic board buffers 1000 events and interfaces with the DAQ; PCI interfaces; VME connectors P0/P1/P2.]

HCAL Readout Control (HRC) Module
Motherboard/daughterboard design. Functionality:
- Slow monitoring and crate communication: PMC CPU, bought from industry
- Input from DCC: to be determined; monitoring and VME data path
- TTC a la ECAL/ROSE: LVDS fanout to all 18 HTR cards
- Monitoring: Fast Monitoring Unit (FMU) FPGA implemented on a PMC daughterboard
- Busy signal output (not shown): goes to the HCAL DAQ Concentrator (HDC)
[Block diagram: TTC in, TTC fanout to the 18 HTR cards; PMC CPU board and PMC logic board (Fast Monitoring Unit); PCI interfaces; input from DCC; VME connectors P0/P1/P2.]

Overall Project Timeline (2000-2004)
Phases: Demonstrator, Prototype, Pre-production Prototype, Final Cards, Installation
- Demonstrator: limited scope; 1st priority: functional test of requirements; 2nd priority: hardware implementation; integration: Fall 2000
- Prototype: hardware implementation is a priority; full channel count; integration: Fall 2001
- Pre-production prototype: production cloned from these; integration effort not necessary; completed: Summer 2002
- Final cards: begins Fall 2002; duration 4-6 months; completed: Spring 2003
- Installation: begins Fall 2004
- Contingency; Alcove tests

(Appendix)

Project Status
Project progress on HTR, DCC, HRC, and integration:
- HTR:
  - R&D on I/O proceeding: LVDS, copper Gbit/s (Vitesse), fiber Gbit/s (HP)
  - I/O board and HP G-link 6U front-end emulator boards almost done
  - Evaluation of FPGA choices (no ASIC a la "Fermi")
  - Better understanding of buffer sizes, synchronization, etc.
- DCC:
  - R&D on the DCC motherboard underway at BU
  - PC-MIP R&D underway, paid for from the DØ project
- HRC:
  - UIC ramping up to provide some functionality of the HRC card by Fall 2000
- Integration:
  - Synchronization issues under study (we are being educated)
  - HCAL will closely follow ECAL unless absolutely necessary
  - Mr. daSilva is helping us a great deal!
  - Plans to use early production cards for alcove tests in early 2003
  - Ready for TRIDAS integration by the beginning of '03
  - Preliminary integration tests possible after the 1st prototype, somewhere in '02 maybe

Current Issues
- Parts cost
  - HTR implementation uses 4 FPGAs in 463 cards = 1852 FPGAs
  - Trying to pin down Xilinx and Altera
- Still learning about functional requirements, e.g.:
  - How to design with an uncertain crossing-determination algorithm in the HTR
  - How the DCC interfaces with the DAQ
  - Synchronization
- FPGA programming
  - Need to do this at the behavioral level
  - This is our biggest engineering concern, and we are doing the following:
    - Gain expertise in-house: Verilog courses for Baden, students, engineers, etc.
    - Request help from Altera and Xilinx with FPGA code design
    - Mine expertise among the Masters-level EE students
    - Tullio Grassi just hired (former ALICE pixel engineer)
  - Confident that we will be OK on this.