Presentation transcript:

Slide 1: Thoughts on Si-W ECAL DAQ
Paul Dauncey, for the Imperial/UCL groups: D. Bowerman, J. Butterworth, P. Dauncey, M. Postranecky, M. Warren, M. Wing, O. Zorba
ECFA Workshop, 1 Sep 2004

Slide 2: How to get from here to there…
- Where "there" is ~2009:
  - Detector TDR should be written
  - Prototyping of final ECAL system
  - Production of final ECAL system
- Five years to define what the ECAL looks like:
  - Which detectors to use
  - Readout system architecture
- UK groups beginning to look at DAQ for the ECAL:
  - Within the context of CALICE, so concentrate on the Si-W ECAL (at least for now)
  - Try to identify any R&D "bottlenecks" which need work
- Order-of-magnitude values only here

Slide 3: Generic Si-W ECAL
- TESLA ECAL design:
  - Si-W sampling calorimeter
  - 8-fold symmetry
  - 40 silicon layers
- Any LC ECAL will probably have:
  - Radius ~2m, length ~5m, thickness ~20cm
  - Sensitive detector of silicon wafers with 1×1 cm² p-n diode pads
- Pad counts (rough arithmetic sketched below):
  - Circumference ~12m → ~1000 pads
  - Length ~5m → ~500 pads
  - Number of layers ~30 (?)
  - Barrel total ~1000 × 500 × 30 ~ 15M pads
  - Including endcaps, total ~20M pads
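A minimal sketch of the pad-count arithmetic above, assuming the round numbers on the slide (1 cm² pads, ~2 m radius, ~5 m length, ~30 layers); the variable names and exact values are illustrative only:

```python
# Order-of-magnitude pad count for a generic Si-W ECAL barrel,
# using the approximate dimensions quoted on the slide.
import math

pad_size_cm = 1.0     # 1x1 cm^2 p-n diode pads
radius_m    = 2.0     # barrel radius ~2 m
length_m    = 5.0     # barrel length ~5 m
n_layers    = 30      # ~30 sampling layers assumed

pads_around = 2 * math.pi * radius_m * 100 / pad_size_cm   # ~1000-1250 pads
pads_along  = length_m * 100 / pad_size_cm                 # ~500 pads
barrel_pads = pads_around * pads_along * n_layers

print(f"pads around barrel ~ {pads_around:.0f}")
print(f"pads along barrel  ~ {pads_along:.0f}")
print(f"barrel total       ~ {barrel_pads / 1e6:.0f}M pads")
# The slide rounds this to ~15M in the barrel and ~20M including endcaps.
```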

Slide 4: Generic Si-W ECAL (cont)
- Design likely to be modular, mechanically and electrically; assume this will be true here also
- TESLA ECAL made from slabs:
  - Inserted into alveoli of the carbon fibre/tungsten mechanical structure
  - Each slab was two-sided, 16 wafers long, each wafer having 10×10 pads
  - Total number of pads per slab was 3200
  - Readout all along one (short) edge
- Generic ECAL: assume modularity will be similar
  - Could be a tower of several layers, etc.
  - Each slab ~4k pads
  - Total detector ~5k slabs

Slide 5: Slab electrical structure
- Now likely there will be ASICs mounted on/near the wafers:
  - Preamplification and shaping
  - Possibly digitisation and threshold suppression
- ASIC will handle something like 128 channels (~100)
  - Need ~40 ASICs per slab, ~200k total for the ECAL (see the sketch below)
- Very restricted space between layers:
  - Need a small gap to take advantage of the tungsten Moliere radius
  - Cooling difficult to get in; must minimise ASIC power consumption
  - Power down electronics between bunch trains?
- All communication to the outside via edge electronics
- Integration still difficult, but assume sufficient cooling is possible
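A rough check of the ASIC counts implied by the previous slides, assuming ~4k pads per slab, ~128 channels per ASIC and ~5k slabs (illustrative values only):

```python
# ASIC count per slab and for the whole ECAL, order of magnitude only.
import math

pads_per_slab     = 4000    # ~4k pads per slab (slide 4)
channels_per_asic = 128     # ~100-128 channels per ASIC
n_slabs           = 5000    # ~5k slabs in the whole detector

asics_per_slab = math.ceil(pads_per_slab / channels_per_asic)
total_asics    = asics_per_slab * n_slabs

print(f"ASICs per slab ~ {asics_per_slab}")          # slide rounds up to ~40
print(f"ASICs in ECAL  ~ {total_asics / 1e3:.0f}k")  # slide quotes ~200k
```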

Slide 6: Assume something like 800 GeV TESLA
- Parameters were:
  - Bunch crossing period within a bunch train = 176 ns ~ 200 ns
  - Number of crossings per bunch train = 4886 ~ 5000
  - Bunch train length = 860 µs ~ 1 ms
  - Bunch train period = 250 ms ~ 200 ms
- Results in a front-end electronics duty factor ~0.5% (see the check below)
- Also assume the ECAL needs to:
  - Digitise the signal every bunch crossing
  - Read out completely before the next bunch train
- However, a need for cosmics may change this; see later…
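A one-line check of the duty factor quoted above, using the unrounded TESLA 800 GeV numbers (assumed values, for illustration):

```python
# Front-end duty factor: fraction of time the bunch train is actually present.
crossing_period_ns  = 176     # bunch crossing period within a train
crossings_per_train = 4886    # crossings per bunch train
train_period_ms     = 250     # bunch train repetition period

train_length_us = crossing_period_ns * crossings_per_train / 1e3   # ~860 us
duty_factor = (train_length_us / 1e3) / train_period_ms

print(f"train length ~ {train_length_us:.0f} us")
print(f"duty factor  ~ {duty_factor * 100:.2f}%")   # ~0.3%, i.e. ~0.5% in round numbers
```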

Slide 7: Fast control and timing
- Beam crossing rate is slow, ~5 MHz
  - Will need some finer adjustment within this period for sample-and-hold, etc.
- Will need a faster clock for FPGAs, ASICs, etc.
  - Assume an order of magnitude is sufficient, i.e. ×8 the crossing clock ~ 40 MHz
  - Straightforward to distribute by fibre to the slab
  - Count multiples of the fast clock to recover beam crossing times (see the counter sketch below)
  - Need a synchronisation signal to start the crossing counters
  - Need a programmable offset to adjust the phase of the crossing counters
- Configuration would be done via fibre also:
  - Load constants (pedestals, thresholds, etc.)
  - Load firmware? This would need to be absolutely failsafe in terms of recovery from errors; the slabs will be inaccessible for years
- Can the receivers on the slab be made generic for the whole detector?
  - Counters plus a configuration data protocol are needed by all systems
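A minimal sketch of the crossing-counter idea: the fast clock (taken here to be ×8 the crossing rate) is counted down to bunch-crossing ticks, with a programmable offset to set the phase. The class and method names are purely illustrative, not a real firmware design:

```python
# Toy model of a slab-end crossing counter driven by the fast clock.
CLOCK_RATIO = 8   # assumed fast-clock ticks per beam crossing (x8 ~ 40 MHz)

class CrossingCounter:
    def __init__(self, phase_offset: int = 0):
        self.phase_offset = phase_offset % CLOCK_RATIO  # loaded via the configuration link
        self.ticks = 0
        self.crossings = 0
        self.synced = False

    def sync(self):
        """Synchronisation signal: start counting from a known phase."""
        self.ticks = self.phase_offset
        self.crossings = 0
        self.synced = True

    def clock_edge(self):
        """Call once per fast-clock tick; returns True on a crossing boundary."""
        if not self.synced:
            return False
        self.ticks += 1
        if self.ticks % CLOCK_RATIO == 0:
            self.crossings += 1
            return True
        return False

# After a sync, every 8th fast-clock tick marks a beam crossing,
# shifted by the programmable phase offset.
counter = CrossingCounter(phase_offset=3)
counter.sync()
boundaries = [t for t in range(32) if counter.clock_edge()]
print(boundaries)   # e.g. [4, 12, 20, 28]
```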

Slide 8: Data volumes and rates
- Each silicon pad needs ADC conversion
  - Basic measurement is the number of particles which went through the pad
  - Usual measure is in units of the Minimum Ionising Particle (MIP) deposit
  - Highest density expected in EM showers: up to 100 particles/mm²
  - In a 1×1 cm² pad, expect up to 10k MIPs; sets the dynamic range needed = 14 bits
  - Assume 2 bytes per pad per sample
- Raw data per bunch train ~ 20M × 5000 × 2 ~ 200 GBytes ECAL total; per slab ~40 MBytes, per ASIC ~1 MByte
  - This appears within ~1 ms; per ASIC ~1 GByte/s
- Clearly do not want to ship out 200 GBytes per bunch train
  - A threshold around 2σ gives ~1% output due to noise
  - Results in ~1B samples above threshold per bunch train
  - Completely dominates the physics rate (TESLA TDR ~ 90 MBytes per train)
  - Each sample above threshold needs a channel and timing label, ~4 bytes
  - Assume ~5 GBytes ECAL total per train; per slab ~1 MByte (arithmetic sketched below)
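The data-volume arithmetic above, written out explicitly (all numbers are the slide's rounded values):

```python
# Back-of-envelope data volumes per bunch train, order of magnitude only.
total_pads = 20e6     # ~20M pads including endcaps
crossings  = 5000     # samples per pad per bunch train
bytes_raw  = 2        # 14-bit ADC value packed into 2 bytes
n_slabs    = 5000
n_asics    = 200e3

raw_per_train = total_pads * crossings * bytes_raw   # ~2e11 bytes
print(f"raw per train ~ {raw_per_train/1e9:.0f} GB, "
      f"per slab ~ {raw_per_train/n_slabs/1e6:.0f} MB, "
      f"per ASIC ~ {raw_per_train/n_asics/1e6:.0f} MB")

# The data arrive within the ~1 ms train, so the per-ASIC instantaneous rate is
print(f"per-ASIC rate during the train ~ {raw_per_train/n_asics/1e-3/1e9:.0f} GB/s")

kept_fraction  = 0.01   # ~1% of samples above a ~2 sigma threshold
bytes_labelled = 4      # channel + timing label per kept sample
suppressed = total_pads * crossings * kept_fraction * bytes_labelled
print(f"suppressed per train ~ {suppressed/1e9:.0f} GB, "
      f"per slab ~ {suppressed/n_slabs/1e6:.1f} MB")
```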

Slide 9: Should suppression be done in the ASIC?
- If no ADC or suppression in the ASIC:
  - Need to transmit a 14-bit precision signal several metres
  - Need to do ~4000 × 5000 ~ 20M conversions at the slab end in ~1 ms; e.g. with 10 ADCs, need 2 GHz, 14-bit ADCs (quick check below)…
  - …or have an analogue buffer to store all 20M analogue values and use slower ADCs
- Analogue suppression in the ASIC without an ADC?
  - Severe problems with setting and monitoring the threshold level, as well as its stability
- Assume the ASIC has an ADC; power is the "only" downside
- If pads are suppressed within the ASIC:
  - Need buffering to smooth out gaps, or require ~ full-speed bandwidth
  - Need configuration of threshold values, etc.; must be non-volatile so values are not lost during ASIC power cycling between bunch trains
  - ASIC significantly more complex (and higher power?)
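A quick check of the ADC requirement if digitisation is left to the slab end (values from the slide; the choice of 10 ADCs is just the slide's example):

```python
# ADC speed needed if all conversions happen at the slab end during the train.
pads_per_slab  = 4000
crossings      = 5000
train_length_s = 1e-3
n_adcs         = 10      # the slide's example of 10 ADCs per slab end

conversions  = pads_per_slab * crossings                 # ~20M per train
rate_per_adc = conversions / n_adcs / train_length_s     # conversions per second
print(f"{conversions/1e6:.0f}M conversions per train")
print(f"each of the {n_adcs} ADCs must run at ~{rate_per_adc/1e9:.0f} GHz (14-bit)")
```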

Slide 10: Unsuppressed ASIC data
- If not suppressed within the ASIC:
  - ASIC simplified; sends every sample immediately after digitisation
  - All decisions on threshold, etc. made at the slab end
  - Allows histogramming, pedestal/noise monitoring, deconvolution, etc. in FPGAs at the slab end
- But need a ~1 GByte/s rate, i.e. fibre
  - Use of fibres for chip-to-chip data transfer is not (yet) standard
  - How to connect the fibre to the ASIC? Part of the ASIC itself?
  - How to drive the fibre? Too much power within the slab?
  - Mechanical size of the laser driver? Too large to fit within the layer gap?
  - Use modulation, with the laser mounted at the end of the slab?

Slide 11: Data transfer from slab
- If no cosmics are required, the rate from a slab is slow
  - ~1 MByte per bunch train, read out within ~200 ms, means ~5 MBytes/s
- Would probably use fibre anyway
  - Smaller, electrically isolated
  - Although wireless has been mentioned
- The issue is what sits at the other end of the fibre
  - Custom receiver or generic network switch?
- Need to get the data for a bunch train from all slabs to the same location
  - For trigger and reconstruction
  - Generically called Event Builders here
  - Assume ~1000 PCs (from the TESLA TDR)

Slide 12: Network switch
- Run TCP/IP directly in the FPGA at the slab end
  - Allows the data to go directly to a network switch
  - Hence goes to the event builders without any further layer
- The issue is network/switch performance (rough rates sketched below)
  - Unlikely to have major buffering capability
  - Would need to get all ~5 GBytes of data to the event builders before the next bunch train
  - Network has to handle ~25 GBytes/s continuously from the ECAL alone
  - Single event builder input ~25 GBytes/s for 200 ms from the ECAL alone, but then no data for ~200 seconds
- Alternative is to buffer data from multiple bunch trains at the slab end
  - Drip-feed data from ~1000 trains simultaneously into the network
  - Memory needed on slabs: ~1000 × 1 MByte ~ 1 GByte on each slab
- Need to do some work on how this compares to LHC; we (UK) have little expertise in this area
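Rough rates for the plain network-switch option, using the slide's numbers (the 1000 event-builder PCs come from the TESLA TDR; everything else is order of magnitude):

```python
# Sizing of the "data straight onto a network switch" option.
suppressed_per_train_gb = 5.0    # ~5 GB per bunch train from the whole ECAL
train_period_s          = 0.2    # ~200 ms between trains
n_event_builders        = 1000   # from the TESLA TDR
slab_output_mb          = 1.0    # ~1 MB per slab per train
n_buffered_trains       = 1000   # trains "drip fed" from slab-end buffers

avg_rate = suppressed_per_train_gb / train_period_s
print(f"average ECAL rate into the network ~ {avg_rate:.0f} GB/s")

# Each builder takes one whole train, so between its trains it sits idle for
idle_s = n_event_builders * train_period_s
print(f"one event builder sees data for ~200 ms, then nothing for ~{idle_s:.0f} s")

print(f"slab buffer for {n_buffered_trains} trains ~ "
      f"{n_buffered_trains * slab_output_mb / 1e3:.0f} GB per slab")
```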

Slide 13: Network switch (cont)

Slide 14: Custom receiver
- Go directly to a PCI card sitting in an off-the-shelf PC
  - The PCI card would need to be custom
  - Around 4 slab output fibres per PCI card; total of ~1200 PCI cards
  - Around 4 PCI cards per PC
  - Total of ~300 PCs needed; each receives ~20 MBytes per bunch train
- PCs act as (cheaper) buffering to the network
  - Much easier to upgrade and maintain; also gives CPU power
- Data rates are reasonable (see the sketch below)
  - Rate into a PCI card is ~4 × 5 MBytes/s ~ 20 MBytes/s
  - Rate onto the PCI bus is ~4 × 20 MBytes/s ~ 80 MBytes/s
  - Should be straightforward with new PCI standards (PCI Express…)
- Need to buffer train data during transfer to the event builders
  - Store ~1000 trains; need ~1000 × 20 MBytes ~ 20 GBytes
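A sizing sketch for the custom-PCI-receiver option, again just re-deriving the slide's round numbers (all inputs are the approximate values quoted above):

```python
# Counts and rates for the custom PCI receiver option.
n_slabs           = 5000
slab_rate_mb_s    = 5      # ~5 MB/s average out of each slab (no cosmics)
slab_per_train_mb = 1      # ~1 MB per slab per bunch train
fibres_per_card   = 4
cards_per_pc      = 4
buffered_trains   = 1000

n_cards = n_slabs / fibres_per_card
n_pcs   = n_cards / cards_per_pc
print(f"PCI cards ~ {n_cards:.0f}, receiver PCs ~ {n_pcs:.0f}")

card_rate = fibres_per_card * slab_rate_mb_s         # ~20 MB/s per card
bus_rate  = cards_per_pc * card_rate                 # ~80 MB/s per PCI bus
per_pc_per_train = cards_per_pc * fibres_per_card * slab_per_train_mb
buffer_gb = buffered_trains * per_pc_per_train / 1e3
print(f"rate into a card ~ {card_rate} MB/s, onto the PCI bus ~ {bus_rate} MB/s")
print(f"per PC per train ~ {per_pc_per_train} MB, "
      f"buffer for {buffered_trains} trains ~ {buffer_gb:.0f} GB")
# The slide rounds these to ~1200 cards, ~300 PCs, ~20 MB per train and ~20 GB of buffer.
```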

Slide 15: Custom receiver (cont)
- The PCI card would also act as the fast control distribution card
  - Driver for the fast control fibre mounted on it also
  - Timing, clock, control and configuration come from the PCI card
  - The PC determines the configuration
  - The fast control system determines timing and control

Slide 16: Custom receiver (cont)
- Data rate out of each PC is uncertain
  - The PC could do a lot of preprocessing of the data…
  - …but each only sees ~0.3% of the ECAL, so significant pattern recognition is not very easy
  - Will certainly do histogramming, monitoring, etc.
- Difficult to estimate how much of the ~5 GBytes per bunch train is sent out over the network to the event builders
  - If all of it, then still need a network capable of handling an average ~25 GBytes/s
  - Each individual connection can be significantly slower
- Assuming effectively no suppression:
  - Rate from each PC = rate on the PCI bus ~ 80 MBytes/s

Slide 17: Custom receiver (cont)

Slide 18: Cosmics?
- Is there a requirement for cosmic data?
  - Monitor pedestal and noise changes? Alignment?
  - For ~20 usable pad hits per cosmic track, need ~1M cosmics to average one hit per pad
  - With a cosmic rate of ~100 Hz, this takes ~10 ksecs ~ 3 hours (see the sketch below)
- "Simplest" method is to run fake bunch-train data-taking between real bunch trains
- Need a longer ASIC shaping time to give multiple samples per waveform; allows reconstruction of time and peak for an arbitrary cosmic arrival time
  - At least 3 samples/waveform; shaping time ~ 3 × bunch crossing ~ 500 ns
  - Slower shaping also helps the intrinsic signal/noise; do anyway?
- Changes the requirements for getting data off the slab
  - Cannot rely on power-down between bunch trains to reduce the cooling requirements
  - Need to speed up readout of the bunch train data
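The cosmic-coverage estimate above, spelled out (all inputs are the slide's assumed values):

```python
# Time for cosmics to give ~1 hit per pad, order of magnitude only.
total_pads     = 20e6
hits_per_track = 20     # ~20 usable pad hits per cosmic track
cosmic_rate_hz = 100    # assumed usable cosmic rate

tracks_needed = total_pads / hits_per_track        # ~1M cosmics
time_s = tracks_needed / cosmic_rate_hz
print(f"~{tracks_needed/1e6:.0f}M cosmics, "
      f"~{time_s:.0f} s (~{time_s/3600:.1f} hours) for ~1 hit per pad")
```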

Slide 19: Cosmics (cont)
- E.g. for a 10% cosmic duty factor:
  - This only gives ~1 hit/pad per day
- Front end (wafers and ASICs) then has a 10% power duty factor
  - Up from ~0.5%; makes cooling more difficult
- Need to read the slab out within a time equal to ~10× the bunch train length (scaling sketched below)
  - Transfer time 10 ms, not 200 ms
  - Data rate from the slab now ~100 MBytes/s, not ~5 MBytes/s
  - Still easily within the rate possible for fibre
- Data rates through the PCI cards and into the PCs also up by ~20×
  - Rate on the PCI bus ~2 GBytes/s; unclear if feasible
  - Need to suppress most data in the PCs; otherwise ~500 GBytes/s onto the network
  - Probably kills the network idea without significant slab buffering (~20 GBytes)
- Important to determine whether cosmics are needed
  - Significant change to the DAQ system requirements
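How the rates scale if cosmics require reading the slab out in ~10× the train length; the 80 MB/s and 25 GB/s baselines are taken from the earlier slides (all values approximate):

```python
# Rate scaling when the slab must be read out in ~10 ms instead of ~200 ms.
slab_per_train_mb = 1.0
train_length_ms   = 1.0
train_period_ms   = 200.0

readout_time_ms = 10 * train_length_ms          # ~10x the bunch train length
slab_rate_mb_s = slab_per_train_mb / (readout_time_ms / 1e3)
print(f"slab readout rate ~ {slab_rate_mb_s:.0f} MB/s (was ~5 MB/s)")

# Downstream rates scale by the same factor of ~20:
scale = train_period_ms / readout_time_ms
print(f"rates up by ~{scale:.0f}x: PCI bus ~ {80 * scale / 1e3:.1f} GB/s, "
      f"unsuppressed network ~ {25 * scale:.0f} GB/s")
```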

Slide 20: Future plans
- Identified some areas which need work
  - Some of it is bringing us up to speed on what exists
  - Some areas we already know definitely need R&D
- UK groups plan to write a proposal for ECAL DAQ
  - Sub-bid within the next round of CALICE funding
  - Cover various of the R&D items discussed here
  - Submit early next year
- Need to firm up ideas between now and then
  - Input, collaboration, etc. very welcome