Progress on CALICE DAQ
Paul Dauncey, Imperial College London, UK
2 July 2003
DAQ requirements
- Event rate of >= 1 kHz during spill, >= 100 Hz average
  - DHCAL may require the rate to be limited to ~300 Hz
- Event sizes of up to 40 kBytes; implies 40 MBytes/s peak
- Read all data without zero suppression (except DHCAL)
- Read out ECAL, (A/D)HCAL, trigger, beam line monitoring
  - (Potentially) separate crates, (potentially) different technologies
- Flexible configuration to work in several beam lines
  - Minimise dependence on external networking, etc.
- Must also be able to run ECAL and HCAL separately during initial tests
- Need to take many different types of runs
  - Beam data, beam interspersed with pedestals, calibration, cosmics, etc.
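The headline throughput follows directly from the two requirements above; a minimal cross-check (constant names are illustrative only):

```cpp
// Cross-check of the requirement: 40 kByte events at the >= 1 kHz
// in-spill rate imply the quoted 40 MBytes/s peak data rate.
const double eventSizeBytes  = 40.0e3;  // up to 40 kBytes per event
const double spillRateHz     = 1.0e3;   // >= 1 kHz during spill
const double peakBytesPerSec = eventSizeBytes * spillRateHz; // 40 MBytes/s
```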
ECAL VME readout electronics
- Being provided by the UK
- Still hoping the ECAL readout boards can be used for the AHCAL
- Boards are in layout at present; the first two prototypes should be fabricated within the next month or so
- Still aiming for all final boards to be produced by April 2004
  - If to be used for the AHCAL, need a decision (and money!) by February 2004 to be included in the final production

Trigger
- Definitely needed, but no group yet identified to provide it
- Note: a trigger-VME interface is built into the ECAL readout boards

Slow controls
- No group yet identified to provide it
- No requirement for a sophisticated system; do we need anything?
- Not included in this talk
Prototype: concept
- Many unknowns; keep flexible
  - Plug-and-play components to be bolted together later as required
- Simple and robust data structure
  - Keep all information in one place; the run file is self-contained
  - All configuration data used is stored within the file
  - Eases merging with simulation and analysis formats
- Allow an arbitrarily complex run structure
  - Number and type of configurations completely flexible within a run
  - Triggers within and outside of spills can be different and can be identified offline
- Implementation
  - POSIX-compliant C++ running on Linux PCs
  - VME access via a VME-PCI interface, with the VME software based on HAL
  - ROOT for graphics and (probably) eventual persistent data storage
Prototype: data structure
- Need to store C++ objects in a type-safe but flexible way
- "Record" (generalised event; includes StartRun, EndRun, etc.) and "subrecords" (for ECAL, HCAL, etc.)
- Simple data array with an identity for run-time type checking
  - Type checking through a simple id-to-class list
  - Prevents misinterpretation of a subrecord
- Record and subrecord handling completely blind to contents
  - Arbitrary payload
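The id-to-class idea can be sketched as below. This is an illustrative reconstruction, not the actual CALICE classes: the names (`SubRecord`, `IdOf`, the payload structs) are assumptions; only the principle (an opaque byte array tagged with an id that is checked against the requested class on unpacking) is from the talk.

```cpp
#include <cstring>
#include <vector>

// Hypothetical id list; one entry per payload class.
enum SubRecordId { EcalData = 1, HcalData = 2 };

// Compile-time "id-to-class list": maps a payload class to its id.
template <typename T> struct IdOf;

struct EcalPayload { int adc[4]; };
struct HcalPayload { short hits[8]; };
template <> struct IdOf<EcalPayload> { static const SubRecordId id = EcalData; };
template <> struct IdOf<HcalPayload> { static const SubRecordId id = HcalData; };

// A subrecord is just an id plus an opaque byte array; the container
// is completely blind to the payload contents.
class SubRecord {
public:
    template <typename T> explicit SubRecord(const T& payload)
        : _id(IdOf<T>::id), _bytes(sizeof(T)) {
        std::memcpy(&_bytes[0], &payload, sizeof(T));
    }
    // Unpacking checks the stored id against the requested class,
    // preventing misinterpretation of the subrecord.
    template <typename T> const T* payload() const {
        if (_id != IdOf<T>::id) return 0; // wrong class requested
        return reinterpret_cast<const T*>(&_bytes[0]);
    }
private:
    SubRecordId _id;
    std::vector<unsigned char> _bytes;
};
```

Asking for the wrong payload class returns a null pointer instead of silently reinterpreting the bytes.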
Prototype: state machine
- All parts of the DAQ are driven round a finite state machine
- Nested layers within a run allow arbitrary numbers of configurations
  - E.g. allows beam data, pedestals, beam data, pedestals…
  - E.g. allows calibration at DAC setting 0, setting 1, setting 2…
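The nested-layer idea can be sketched as a small state machine. The state and method names here are assumptions for illustration; the point is that configuration start/end pairs nest inside a run, so one run can hold any number of configurations, and illegal transitions are refused.

```cpp
// Illustrative sketch of a nested run/configuration state machine;
// names are not the actual CALICE DAQ states.
enum State { Idle, InRun, InConfiguration, InEvent };

class DaqStateMachine {
public:
    DaqStateMachine() : _state(Idle), _configurations(0) {}
    bool startRun() { return move(Idle, InRun); }
    // Each start/end configuration pair is one nested layer; a run may
    // contain any number of them (beam data, pedestals, DAC settings...).
    bool startConfiguration() {
        if (!move(InRun, InConfiguration)) return false;
        ++_configurations;
        return true;
    }
    bool startEvent()       { return move(InConfiguration, InEvent); }
    bool endEvent()         { return move(InEvent, InConfiguration); }
    bool endConfiguration() { return move(InConfiguration, InRun); }
    bool endRun()           { return move(InRun, Idle); }
    int configurations() const { return _configurations; }
private:
    bool move(State from, State to) {
        if (_state != from) return false; // illegal transition refused
        _state = to;
        return true;
    }
    State _state;
    int _configurations;
};
```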
Prototype: data transfer
- Data movement via a standardised interface (DIO)
  - Within a PC: each interface driven by a separate thread; only a pointer is copied
  - PC-to-PC: via a socket (with the same interface); the actual data are copied
- The standardised interface allows the configuration of data handlers to be changed easily
  - Flexibility to optimise to whatever configuration is needed
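A minimal sketch of the interface idea, assuming hypothetical names (`Dio`, `MemoryDio`, `RecordBuffer`) rather than the real DIO API: the same abstract read/write interface can be backed either by an in-memory queue (pointer copy only) or, in a socket implementation not shown here, by serialising the actual bytes between PCs.

```cpp
#include <deque>

struct RecordBuffer { int size; }; // stand-in for a real record buffer

// Abstract data-movement interface in the spirit of the DIO layer.
class Dio {
public:
    virtual ~Dio() {}
    virtual bool write(RecordBuffer* r) = 0;
    virtual RecordBuffer* read() = 0;
};

// Within one PC the queue holds pointers only, so a "transfer" copies
// no event data; a socket-backed Dio would copy the bytes instead.
class MemoryDio : public Dio {
public:
    virtual bool write(RecordBuffer* r) { _queue.push_back(r); return true; }
    virtual RecordBuffer* read() {
        if (_queue.empty()) return 0;
        RecordBuffer* r = _queue.front();
        _queue.pop_front();
        return r;
    }
private:
    std::deque<RecordBuffer*> _queue;
};
```

Because handlers only see the `Dio` interface, swapping a local queue for a socket link is a configuration change, not a code change.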
Prototype: topology
- For tests, assume the worst case: each subsystem (ECAL, HCAL, beam monitoring) read out with a separate PC
  - Requires one socket-socket branch for each
- Each branch can read out a separate technology (VME, PCI, etc.)
- The monitor does not necessarily sample all events; its buffer lets events through only when spare CPU is available
Prototype: status
- First version of the data structure software exists
  - Records and subrecords; loading/unloading, etc.
  - Arbitrary (templated) payload for subrecords
- First version of the data transport software exists
  - Buffers, copiers, mergers, demergers, etc.
  - Arbitrary (templated) payload with specialisation for records
- First version of the run control software exists
  - GUI already shown
  - Both automatic (pre-defined) and manual run structures
- These work together; sustained rates achieved:
  - >10 kHz with empty events
  - >1 kHz with large events
  - Depends critically on the network between the PCs on the different branches
Major items still to be done
- VME access
  - SBS 620 VME-PCI interface board will arrive within a few weeks
  - Will base the VME interface on the Hardware Access Library (CERN/CMS), as there is significant experience of this at Imperial
- Data and configuration classes
  - Until the VME board interfaces are defined, cannot finalise the data format for event data or for board configuration data
- Output data format
  - Currently have ASCII and binary (endian-specific) output formats
  - Assume ROOT would be best: actual objects stored, can be used interactively, easy graphics, machine-independent, etc.
  - Will need to convert the raw data format to a zero-suppressed analysis data format
- Online monitoring
  - Will be done via the ROOT memory map facility (TMapFile), which allows interactive real-time histogramming
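"Endian-specific" means a binary word written on one architecture reads back byte-swapped on the other, which is why ROOT's machine-independent storage is attractive. A minimal sketch of the issue (helper names are illustrative, not part of the DAQ code):

```cpp
// Detect host byte order and swap a 32-bit word; a fixed file byte
// order would make the binary format machine-independent.
inline bool littleEndian() {
    const unsigned int one = 1;
    return *reinterpret_cast<const unsigned char*>(&one) == 1;
}

inline unsigned int byteSwap(unsigned int w) {
    return ((w >> 24) & 0x000000ffU) |
           ((w >>  8) & 0x0000ff00U) |
           ((w <<  8) & 0x00ff0000U) |
           ((w << 24) & 0xff000000U);
}

// Convert a word to a fixed (here little-endian) file order on any host.
inline unsigned int toFileOrder(unsigned int w) {
    return littleEndian() ? w : byteSwap(w);
}
```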
Alternatives: MIDAS? XDAQ?
- MIDAS (PSI)
  - No experience of using this in the UK
  - Written for ~MByte data rates, ~100 Hz event rates, single-PC systems
  - Limited state diagram; no ability to take different types of events within a run
  - A lot of baggage (databases, slow controls) makes it more complex than required
  - C, not C++, so a less natural interface downstream (and not type-safe)
- XDAQ (CERN/CMS)
  - Significant experience of this at Imperial; useful to have experts on hand
  - Optimised for CMS (no beam spill structure, asynchronous trigger and readout) but easily handles CALICE event rates and data sizes
  - Needs further investigation
- If moving to an existing system, XDAQ seems more suitable
  - Beware of the "3am crash" issue; it is hard to debug code written by other people in a hurry…
Conclusions
- Prototype DAQ system exists
  - Covers all CALICE requirements so far
- Still several major items to be done
  - Main thing to be defined is the VME data structure
  - This depends on FPGA firmware, so it will not necessarily be finalised even when the hardware is available
- Other existing DAQ systems could be studied further
  - Most of the remaining items would need to be done whichever system is used
- Schedule seems straightforward; limited by hardware
  - See no obvious problems with being ready for the beam test in 2004