David Abbott CODA3 - DAQ and Electronics Development for the 12 GeV Upgrade

Outline  What is CODA  DAQ Issues / Aging Technologies  Requirements for 12 GeV  CODA3 Developments  Software  Hardware  Summary

The CODA mission
- CODA is a software toolkit with specialized hardware support.
- Modular software components use the network for inter-process communication and event transport.
- Use open standards; minimize the use of commercial software while maximizing the use of commercial hardware.
- DAQ systems for each experimental Hall can be "built up" from common components to fit their needs.

What is CODA

Data Acquisition Status
- The Data Acquisition Group is charged with providing a functional, unified acquisition system appropriate for ALL experimental programs at JLAB.
- Recent experiments have begun to test the limits of the current CODA system.
- Development in both hardware and software technologies will be necessary to provide continued support of the future 6 GeV program.
- Current DAQ projects reflect the philosophy that we can progress to support the physics of the 12 GeV program through an "upgrade" of the existing, proven system.

Front-End Issues…
- The CODA ROC is a customizable software object that provides a standard way for the user to access front-end hardware, whatever that may be. It must evolve.
- Obsolete FASTBUS digitizing electronics needs a replacement (in VME); there are far fewer appropriate commercial solutions today.
- Real-time readout on a per-event basis limits the maximum accepted L1 trigger rate (~10 kHz).
- The 32-crate limit of the trigger distribution system is nearly reached in Hall B.
- DAQ, trigger, and electronics must all be designed to work together, efficiently and "customized"!

Other DAQ Limitations
- Event transport limitations in the current CODA architecture are being seen for moderately complex systems.
- Aging software technologies and reliance on third-party packages are making code portability and upkeep difficult.
- Monitoring and control of large numbers of distributed objects are not handled in a consistent way (too many protocols).
- Slow controls are only minimally supported.
[Diagram: Hall B "bottlenecks" - ~30 ROCs feeding 1 EB, with labels for throughput (MB/s) and trigger rate (kHz)]

A General Plan
- Replace aging technologies: Run Control, Tcl/Tk-based DAQ components, mSQL, FASTBUS/CAMAC.
- Hall D (GlueX) requirements drive the development direction.
- Maintain cross-platform support: Linux, Solaris, vxWorks, OS X, 64-bit…
- Support and use new commercial advances where possible.
- More custom hardware design will be required, along with an electronics support group to maintain it.

CODA3 - Requirements/Goals
- Pipelined electronics (FADC, TDC): dead-timeless system, replacement for obsolete electronics, eliminates large numbers of delay cables.
- Integrated L1/L2 trigger and trigger distribution system: support up to 200 kHz L1 trigger, use the FADCs for L1 trigger input, support 100+ crates.
- Parallel/staged event building: handle ~100 input data streams with scalable (>1 GByte/s) aggregate data throughput.
- L3 online farm: online (up to x10) reduction in data to permanent storage.
- Integrated experiment control: DAQ run control plus "slow" control/monitoring; distributed, scalable, and "intelligent".

Anatomy of a CODA3 DAQ System

Example - Hall D DAQ Existing Halls

Front-End Goals
- L1 trigger rate up to 200 kHz
- Block up events (200-event block -> 2 kHz)
- Move some ROL runtime code to modules (FPGAs)
- ADCs provide L1 trigger data (hence we need a distributed high-speed clock, 250 MHz)
- New Trigger Supervisor: perhaps 100+ crates; support pipelining and event blocking; manage flow control into the DAQ system back end

Front-End Systems
[Diagram of a pipelined front-end crate]
- VME CPU (MV6100): PowerPC, GigE, vxWorks or Linux, runs the CODA ROC; readout at ~160+ MB/s
- Trigger Interface (V3): pipelined trigger, event blocking, clock distribution, event ID bank info
- F1 TDC and Flash ADC payload modules
- R&D to support fully pipelined crates capable of 200 kHz trigger rates (see the sketch below)
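To make the pipelined, block-oriented readout concrete, here is a minimal sketch of the polling loop a readout controller might run: the CPU never touches single events, it only moves whole blocks once the Trigger Interface says one is complete. The function names (ti_block_ready, fadc_dma_read_block, ship_block) and the buffer size are hypothetical placeholders, not actual CODA or driver calls.

    /* Sketch only: block-oriented readout for a pipelined front end.
     * ti_block_ready(), fadc_dma_read_block() and ship_block() are
     * hypothetical placeholders, not real CODA calls. */
    #include <stdint.h>
    #include <stddef.h>

    #define BLOCK_SIZE 200          /* events per block (from the slides)  */
    #define BUF_WORDS  (64 * 1024)  /* scratch buffer for one DMA'd block  */

    extern int    ti_block_ready(void);                           /* hypothetical */
    extern size_t fadc_dma_read_block(uint32_t *buf, size_t max); /* hypothetical */
    extern void   ship_block(const uint32_t *buf, size_t nwords); /* hypothetical */

    void readout_loop(void)
    {
        static uint32_t buf[BUF_WORDS];

        for (;;) {
            /* The Trigger Interface signals once BLOCK_SIZE triggers have
             * been digitized; the CPU is never interrupted per event.    */
            if (!ti_block_ready())
                continue;

            /* One DMA moves the whole block out of the FADC/TDC pipelines. */
            size_t nwords = fadc_dma_read_block(buf, BUF_WORDS);

            /* Hand the block to the event-transport layer (network).      */
            ship_block(buf, nwords);
        }
    }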

Level 1 Trigger
- Distributed high-speed clock derived from the 499 MHz accelerator clock, stepped down at the crate; inter-crate jitter <50 ps
- A subset of ROCs collect summed ADC data and send it to the L1 trigger, in sync
- 12-bit sums/crate x 250 MHz --> 3 Gbit/s links
- The trigger decision goes to the Trigger Supervisor for pipelined distribution to all crates

Staged/Parallel Event Building
- Divide the total throughput into N streams (1 GB/s -> N streams of x MB/s each), as sketched below.
- Two stages: Data Concentration -> Event Building.
- Each EMU is a software component running on a separate host.
- 2N hosts are interconnected through one BIG switch.
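One way to picture how the aggregate throughput divides across parallel streams: every data concentrator applies the same routing rule to a block, so all fragments of a given block arrive at the same event builder, and the 1 GB/s aggregate splits into N independent flows. The sketch below is illustrative only; the stream count and function name are assumptions, not CODA code.

    /* Illustration only: routing blocks to N parallel event-builder
     * streams.  N_STREAMS and the function are hypothetical. */
    #include <stdint.h>

    #define N_STREAMS 8   /* assumed number of DC -> EB streams */

    /* Every data concentrator uses the same rule, so fragments belonging
     * to the same block number always meet at the same event builder. */
    static int stream_for_block(uint64_t block_number)
    {
        return (int)(block_number % N_STREAMS);
    }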

Event Management Unit
- The EMU is built around the ET system for customizable processing/distribution of event streams.
- Examples: data concentrator for ROCs, sub-event builder, farm distribution point, event recorder.
- User processes can attach to any EMU in the system (see the sketch below).
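Because any user process can attach to an EMU, a monitoring or analysis task sees the data as a simple get/process/put loop on the event stream. The sketch below shows only that pattern; the function names are hypothetical placeholders, not the actual ET API.

    /* Pattern sketch only: a user process attached to an EMU's event
     * stream.  All functions below are hypothetical placeholders, not
     * the real ET API. */
    typedef struct event event_t;                /* opaque event handle */

    extern void    *attach_to_emu(const char *name);       /* hypothetical */
    extern event_t *get_event(void *station);               /* hypothetical */
    extern void     put_event(void *station, event_t *ev);  /* hypothetical */
    extern void     inspect(const event_t *ev);             /* hypothetical */

    void monitor(void)
    {
        /* Attaching creates a station in the event flow; the EMU keeps
         * running whether or not anyone is attached. */
        void *station = attach_to_emu("monitor");

        for (;;) {
            event_t *ev = get_event(station);  /* blocks until an event arrives */
            inspect(ev);                       /* user code: histograms, checks */
            put_event(station, ev);            /* return the event to the flow  */
        }
    }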

Level 3 Farm
- Can be used for analysis or filtering (see the sketch below)
- Supports 100s of nodes
- Nodes can come and go during event taking
- Filtered events will be time-ordered on the back end
- Throughput: ~1 GB/s in, ~100 MB/s out
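An L3 node reduces the data volume simply by deciding, per event, whether to forward it; keeping roughly one event in ten turns the 1 GB/s input into ~100 MB/s to permanent storage. A minimal sketch of that accept/reject loop, with hypothetical function names only:

    /* Sketch only: the accept/reject loop of an L3 farm node.  All
     * function names are hypothetical placeholders. */
    #include <stdbool.h>

    typedef struct event event_t;

    extern event_t *next_event_from_farm_manager(void); /* hypothetical */
    extern bool     physics_filter(const event_t *ev);  /* hypothetical */
    extern void     send_to_recorder(event_t *ev);      /* hypothetical */
    extern void     discard(event_t *ev);               /* hypothetical */

    void l3_node(void)
    {
        for (;;) {
            event_t *ev = next_event_from_farm_manager();

            /* Keeping ~1 event in 10 gives the x10 reduction; the
             * recorder restores time ordering on the back end. */
            if (physics_filter(ev))
                send_to_recorder(ev);
            else
                discard(ev);
        }
    }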

Current DAQ Projects - Overview
Components: Run Control, CODA ROC, CODA EB/ER (EMU)
Software tools: cMsg, ET, EVIO
Hardware: FADC/F1TDC, Trigger Interface (VME/PCI), trigger/clock distribution
R&D: embedded Linux, experiment control, staged/parallel event building, VXS high-speed serial, 200 kHz trigger/readout, low-jitter clock distribution

Some Current Projects
- cMsg: homegrown publish-subscribe messaging, which will become the basis for DAQ online communication
- Java agent-based Run Control
- EMU: C/C++ based, modular, threaded
- Event blocking:
  - Hardware: new TI to support the existing Trigger Supervisor
  - Software: new ROC, parallel readout lists, EVIO
- Embedded Linux: establish a stable, maintainable Linux distribution for SBCs with access to VME and PCI hardware

Software Tools
- cMsg: homegrown publish-subscribe messaging, the basis for all DAQ online communication.
- ET (Event Transport): high-speed, shared-memory-based event distribution system; also supports network transport.
- EVIO: library/tools for reading, writing, building, analyzing, and displaying events in the CODA framework (example below).
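For a feel of what EVIO handles, here is a hedged sketch of decoding a bank header, assuming the conventional EVIO layout in which a bank starts with a 32-bit length word followed by a word packing tag, data type, and num; the exact bit fields should be checked against the EVIO documentation before relying on this.

    /* Hedged sketch: decoding an EVIO-style bank header.  Assumes the
     * conventional layout (length word, then tag/type/num word); verify
     * the exact bit fields against the EVIO documentation. */
    #include <stdint.h>
    #include <stdio.h>

    void print_bank_header(const uint32_t *bank)
    {
        uint32_t length = bank[0];             /* words that follow this one */
        uint32_t header = bank[1];

        uint32_t tag  = header >> 16;          /* bank tag       */
        uint32_t type = (header >> 8) & 0xff;  /* data type code */
        uint32_t num  =  header       & 0xff;  /* user number    */

        printf("bank: tag=0x%x type=0x%x num=%u length=%u words\n",
               tag, type, num, length);
    }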

cMsg - CODA Messaging System
- Publish-subscribe messaging system
- Java, C++, and C implementations and APIs
- Replaces older separate packages with one maintainable system (pattern sketch below)
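cMsg is built around messages that producers publish and consumers subscribe to by subject and type. The sketch below shows that publish-subscribe pattern in C using hypothetical function names; it is not the actual cMsg API.

    /* Pattern sketch only: publish-subscribe messaging of the kind cMsg
     * provides.  Every function below is a hypothetical placeholder. */
    #include <stdio.h>

    typedef void (*callback_t)(const char *subject, const char *type,
                               const char *text);

    extern void *msg_connect(const char *server, const char *name);  /* hypothetical */
    extern void  msg_subscribe(void *conn, const char *subject,
                               const char *type, callback_t cb);     /* hypothetical */
    extern void  msg_publish(void *conn, const char *subject,
                             const char *type, const char *text);    /* hypothetical */

    static void on_status(const char *subject, const char *type, const char *text)
    {
        printf("[%s/%s] %s\n", subject, type, text);  /* e.g. update a GUI */
    }

    void example(void)
    {
        void *conn = msg_connect("daq-server", "myComponent");

        /* A run-control GUI might subscribe to status messages...     */
        msg_subscribe(conn, "ROC1", "status", on_status);

        /* ...while a readout controller publishes them.               */
        msg_publish(conn, "ROC1", "status", "events=123456 rate=1.9kHz");
    }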

Experiment Control
- CODA Java-based (v1.5) "intelligent" agents (AFECS)
- JADE extensions provide a runtime "distributed" JVM.
- Agents provide customizable intelligence and communication with external processes.
[Diagram: agents (A) for the ROC, HV, and run control distributed across a JVM spanning Hosts 1, 2, and 3]

Hierarchy of Control
[Diagram: physical components (CODA ROC, CODA EMU, EPICS IOC, EPICS CA gateway, trigger software/hardware, online analysis) are each wrapped by a normative agent; supervisor agents and a grand supervisor sit above them inside the AFECS platform, with IPC links to the front-end ACC and to GUI/user and web clients]

Run Control GUI

Experiment Monitoring and Control

JLAB Pipeline TDC
[Figure: TDC ASIC (8 channels); hits recorded along the time axis relative to the trigger]

JLAB Flash ADC
[Figure: FPGA-based pipeline; hits recorded along the time axis relative to the trigger, over an ~8 µs window]

VME64X - VXS Interconnect
- New P0/J0 high-speed connector: 45 differential pairs
- 6 GHz bandwidth
- 18 VME payload slots
- 2 switching slots

Electronics Development

VXS - L1 Trigger
[Diagram of a VXS crate]
- VME CPU (V7865): Intel, GigE, Linux, runs the CODA ROC; VME readout of event data
- Switch slots hold Sum and Trigger Distribution modules (VXS): collect sums/hits, pass data to the master L1, clock and trigger distribution
- Flash ADCs use the VXS high-speed serial backplane (P0) to deliver energy sum and hit data

ATCA as a Standard
- Becoming a widely adopted commercial standard
- 8U blade form factor
- High power / high availability
- Few applications in physics yet, and not particularly cheap
- Possible use for the master trigger processor and Trigger Supervisor

Summary  CODA version 3 is now being molded - and can be adapted to future requirements!!  Our plan is to phase in new tools to provide a smooth transition from CODA2 --> CODA 3 (CODA 2.6 Release)  New custom electronics will be available for use soon.