Data Acquisition Development at JLAB
David Abbott - Jefferson Lab DAQ Group
Data Acquisition Status
The DAQ Group now stands at 5 members.
Recent experiments have begun to test the limits of the current distribution of CODA (v2.5).
Aging technologies (software and hardware) are being retired and replaced, both in support of the 6 GeV program and for development of the 12 GeV program.
We continue to use open standards and to minimize the use of commercial software while maximizing the use of commercial hardware.
We continue to focus on a "migration" plan from CODA2 to CODA3 (v2.6).
General DAQ Issues…
Front-end hardware is evolving: real-time intelligence is moving from the CPU to FPGAs.
Old hardware technologies are no longer commercially supported (FASTBUS).
CPU-based real-time readout on a per-event basis limits the maximum accepted L1 trigger rate (~10 kHz).
The 32-crate limit of the trigger distribution system is nearly reached in Hall B.
Event transport limitations in the current CODA architecture are being seen for moderately complex systems.
Advances in computing platforms and operating systems (multi-core, more memory, 64-bit systems, etc.) are not taken advantage of.
Aging software technologies and reliance on third-party packages are making code portability and upkeep difficult.
Monitoring and control of large numbers of distributed objects are not handled in a consistent way (too many protocols).
"Slow" controls are only minimally supported.
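A rough way to see why per-event readout tops out near 10 kHz, and why event blocking matters for anything faster: each trigger pays a fixed software cost (interrupt handling, readout setup) that blocking amortizes across many events. A minimal sketch; the microsecond costs are assumptions picked to reproduce the quoted rates, not measured CODA numbers:

```c
#include <stdio.h>

/* Illustrative costs only (assumptions, not measured CODA values):
 * a fixed per-readout software overhead and a per-event transfer time. */
#define OVERHEAD_US 96.0   /* fixed cost per readout, microseconds (assumed) */
#define XFER_US      4.0   /* data movement per event, microseconds (assumed) */

/* Maximum accepted trigger rate when 'block' events are read per interrupt. */
static double max_rate_hz(int block)
{
    double us_per_event = OVERHEAD_US / block + XFER_US;
    return 1e6 / us_per_event;
}

int main(void)
{
    int blocks[] = {1, 10, 100};
    for (int i = 0; i < 3; i++)
        printf("block level %3d -> ~%.0f Hz\n", blocks[i], max_rate_hz(blocks[i]));
    /* block level 1 gives ~10 kHz; blocking by 100 approaches 200 kHz */
    return 0;
}
```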
CODA3 - Requirements/Goals
Pipelined Electronics (FADC, TDC)
– Dead-timeless system
– Replacement for obsolete electronics
– Eliminate large numbers of delay cables
Integrated L1/L2 Trigger and Trigger Distribution System
– Support up to 200 kHz L1 trigger
– Use FADC for L1 trigger input
– Support 100+ crates
Parallel/Staged Event Building
– Handle ~100 input data streams
– Scalable (>1 GByte/s) aggregate data throughput
L3 Online Farm
– Online (up to x10) reduction in data to permanent storage
Integrated Experiment Control
– DAQ RunControl + "slow" control/monitoring
– Distributed, scalable, and "intelligent"
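A toy rate budget shows how these goals hang together: the L1 rate times the event size sets the event-building load, and the L3 farm divides what reaches storage. The event size below is an assumption chosen to land near the stated >1 GByte/s figure, not a number from this talk:

```c
#include <stdio.h>

/* Back-of-the-envelope budget connecting the goals above.
 * EVENT_KB is an assumed mean event size (illustrative). */
#define L1_RATE_HZ   200e3   /* 200 kHz L1 trigger goal             */
#define EVENT_KB       5.0   /* assumed mean event size, kilobytes  */
#define L3_REDUCTION  10.0   /* up to x10 reduction in the L3 farm  */

int main(void)
{
    double building_gbs = L1_RATE_HZ * EVENT_KB * 1e3 / 1e9;
    printf("event building : ~%.1f GB/s aggregate\n", building_gbs);
    printf("to storage     : ~%.2f GB/s after L3\n", building_gbs / L3_REDUCTION);
    return 0;
}
```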
Proposed GlueX DAQ System
Current DAQ Projects
Components: CODA Objects, CODA ROC, CODA EMU (EB/ER/ANA), Run Control
Software Tools: cMsg, ET, EVIO, Config and Display GUIs
Hardware: FADC/F1TDC, Trigger Interface (VME/PCI), Trigger/Clock Distribution, Commercial Module Support
R&D: Embedded Linux, Experiment Control, Staged/Parallel Event Building, 200 kHz Trigger/Readout, Clock Distribution, L3 Farm
Front-End Systems
VME CPU - (MV6100) PPC, GigE, vxWorks, or (GE V7865) Intel, GigE, Linux - runs the CODA ROC with readout of ~160-200 MB/s.
Trigger Interface - (V3) provides the pipeline trigger, event blocking, clock distribution, and event ID/bank info.
Payload modules: F1 TDC and Flash ADC.
R&D to support fully pipelined crates capable of 200 kHz trigger rates.
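In a fully pipelined crate the ROC no longer services one interrupt per trigger; the modules buffer a block of events and the CPU moves the whole block in one DMA. A minimal sketch of such a loop: faDataReady, faReadBlock, and rocShipBuffer are hypothetical stand-ins for the real driver and CODA ROC interfaces, stubbed here so the sketch compiles:

```c
#include <stdio.h>
#include <stdint.h>

#define BLOCK_LEVEL 100            /* events per block (illustrative) */
#define BUF_WORDS   (64 * 1024)

/* Hypothetical stand-ins for the FADC driver and ROC hand-off: */
static int  faDataReady(void)                   { return 1; }              /* stub: block ready?  */
static int  faReadBlock(uint32_t *buf, int max) { (void)buf; return max; } /* stub: DMA a block   */
static void rocShipBuffer(uint32_t *buf, int n)
{
    (void)buf;
    printf("shipped %d words (~%d events)\n", n, BLOCK_LEVEL);
}

int main(void)
{
    static uint32_t buf[BUF_WORDS];

    /* One pass shown; a real ROC loops forever. The key point: one DMA
     * and one buffer hand-off cover BLOCK_LEVEL events, so the per-event
     * software cost shrinks by ~100x versus per-event readout. */
    if (faDataReady()) {
        int nwords = faReadBlock(buf, BUF_WORDS);
        if (nwords > 0)
            rocShipBuffer(buf, nwords);  /* event ID/blocking info rides in bank headers */
    }
    return 0;
}
```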
VXS - L1 Trigger
VME CPU - ??? (Intel, GigE, Linux) - runs the CODA ROC for VME readout of event data.
Sum and Trigger Distribution Modules (VXS) collect sums/hits and pass data to the L1 master; they also handle clock distribution and trigger distribution.
Flash ADCs use the VXS high-speed serial backplane (P0) to deliver energy sum and hit data.
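A toy model of the crate-level decision the trigger firmware makes: each FADC streams a partial energy sum over its P0 lanes every clock tick, and the switch-slot module adds them and compares against a programmable threshold. Channel counts, widths, and the threshold below are illustrative, not firmware values:

```c
#include <stdio.h>
#include <stdint.h>

#define N_FADC    16      /* payload slots feeding the switch slot */
#define THRESHOLD 2000    /* ADC counts (illustrative)             */

/* One clock tick: returns 1 if this crate reports a trigger to the L1 master. */
static int crate_trigger(const uint16_t partial_sum[N_FADC])
{
    uint32_t total = 0;
    for (int i = 0; i < N_FADC; i++)
        total += partial_sum[i];   /* sums arriving over the VXS P0 lanes */
    return total > THRESHOLD;
}

int main(void)
{
    uint16_t quiet[N_FADC] = {0};
    uint16_t hit[N_FADC]   = {0};
    hit[3] = 1500; hit[4] = 900;   /* a localized energy deposit */

    printf("quiet crate: %d, crate with hit: %d\n",
           crate_trigger(quiet), crate_trigger(hit));
    return 0;
}
```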
Building a DAQ System
(Diagram: a ROC feeds an EMU; from there, data can flow to a file, to ET, or to a user process.)
Event Distribution
ET provides efficient transport of data for event building, as well as flexible user access.
The EMU provides easy configuration and user-specific processing options.
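The ET model is that events circulate through user-defined "stations": a consumer attaches to a station, takes events, and puts them back so the flow continues. A minimal consumer sketch; the et_* calls, the system file name, and the station name are paraphrased from memory of the ET C API and should be treated as assumptions (check et.h for the real signatures; station creation is assumed done elsewhere):

```c
#include <stdio.h>
#include "et.h"   /* JLab ET system header */

int main(void)
{
    et_sys_id     id;
    et_openconfig openconfig;
    et_stat_id    stat;
    et_att_id     att;
    et_event     *ev;

    et_open_config_init(&openconfig);
    if (et_open(&id, "/tmp/et_sys_mydaq", openconfig) != ET_OK) {  /* map the ET system */
        fprintf(stderr, "cannot open ET system\n");
        return 1;
    }

    et_station_name_to_id(id, &stat, "USER_MONITOR");  /* monitoring station, assumed created */
    et_station_attach(id, stat, &att);

    for (;;) {
        et_event_get(id, att, &ev, ET_SLEEP, NULL);  /* take one event from the flow    */
        /* ... inspect or histogram the event data here ...                             */
        et_event_put(id, att, ev);                   /* return it so building continues */
    }
}
```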
Staged/Parallel Event Building
Divide total throughput into N streams (1 GB/s -> N x MB/s).
Two stages: Data Concentration -> Event Building.
Each EMU is a software component running on a separate host.
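The arithmetic behind the staging: if one EMU host can sustain roughly x MB/s, the 1 GB/s aggregate needs N ≈ 1000/x parallel streams, concentrated in stage one and merged in stage two. A small sketch; the per-host throughput is an assumed parameter, not a CODA specification:

```c
#include <stdio.h>
#include <math.h>

#define TARGET_MBS   1000.0   /* 1 GB/s aggregate goal              */
#define PER_HOST_MBS  100.0   /* assumed throughput of a single EMU */

int main(void)
{
    int n = (int)ceil(TARGET_MBS / PER_HOST_MBS);
    printf("stage 1: %d data concentrators at ~%.0f MB/s each\n",
           n, TARGET_MBS / n);
    printf("stage 2: event builders merging the %d concentrated streams\n", n);
    return 0;
}
```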
AFECS - Integrated Experiment Control
FIPA-compliant, Java-based (v1.5) "intelligent" agents.
Extensions provide runtime "distributed" containers (JVMs).
Agents provide customizable intelligence (a state machine) and communication (cMsg, CA, SNMP, etc.) with external processes.
Many independent "logical" control systems can operate within the platform.
The system is scalable: agents can migrate to JVM containers on different nodes at runtime.
System tested: 3 containers on different hosts with 1000 agents controlling 1000 physical components distributed over 20 other nodes, at ~40% CPU and 200 MB of memory per JVM.
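Each agent wraps its component in a state machine driven by control commands. A minimal sketch of such a machine, using the familiar CODA transition sequence; the enum and table are illustrative C, not the actual AFECS implementation (which is Java):

```c
#include <stdio.h>
#include <string.h>

typedef enum { BOOTED, CONFIGURED, DOWNLOADED, PRESTARTED, ACTIVE, ENDED } state_t;

/* Legal transitions, following the usual CODA run-control sequence. */
static const struct { state_t from; const char *cmd; state_t to; } table[] = {
    { BOOTED,     "configure", CONFIGURED },
    { CONFIGURED, "download",  DOWNLOADED },
    { DOWNLOADED, "prestart",  PRESTARTED },
    { PRESTARTED, "go",        ACTIVE     },
    { ACTIVE,     "end",       ENDED      },
};

/* Apply a command; return the new state, or the old one if illegal. */
static state_t transition(state_t s, const char *cmd)
{
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (table[i].from == s && strcmp(table[i].cmd, cmd) == 0)
            return table[i].to;
    fprintf(stderr, "illegal transition '%s'\n", cmd);
    return s;
}

int main(void)
{
    state_t s = BOOTED;
    const char *run[] = { "configure", "download", "prestart", "go", "end" };
    for (size_t i = 0; i < 5; i++)
        s = transition(s, run[i]);
    printf("final state: %d (ENDED = %d)\n", s, ENDED);
    return 0;
}
```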
Hierarchy of Control
(Diagram: within the AFECS platform (Java 1.5+), a grand supervisor sits above supervisor agents, which oversee normative agents; the agents communicate via IPC and the web with physical components such as the CODA ROC, CODA EMU, EPICS IOC, EPICS CAG, trigger software and hardware, online analysis, and the front-end ACC.)
CODA Evolves
(Diagram: cMsg and AFECS feed into CODA3.)
CODA 2.6 Features
Integrates the new AFECS run control system.
cMsg IPC replaces the RC (ROC, EB, ER) communication as well as CMLOG message logging.
Support for newer operating systems and compilers:
– vxWorks 5.5, 6+
– RHEL 4, Solaris 10, OS X
New and updated tools:
– ET upgraded, 64-bit compliant
– Db2cool
– EVIO package
Support for new CODA3 objects and components.
Integration of long-standing bug fixes, new driver libraries, and feature enhancements.
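The cMsg switch means components talk over one publish/subscribe layer instead of several protocols. A sketch of the kind of status message a component might publish; the calls and UDL format are paraphrased from memory of the cMsg C API and are assumptions here, as are the subject, type, and rate values (check cMsg.h):

```c
#include "cMsg.h"   /* JLab cMsg header */

int main(void)
{
    void *domain, *msg;

    /* The UDL names the cMsg server and namespace (illustrative values). */
    cMsgConnect("cMsg://localhost:45000/cMsg/daq", "ROC1", "readout crate 1", &domain);

    msg = cMsgCreateMessage();
    cMsgSetSubject(msg, "ROC1/status");   /* hypothetical subject   */
    cMsgSetType(msg, "eventRate");        /* hypothetical type      */
    cMsgSetText(msg, "9500");             /* Hz, illustrative value */

    cMsgSend(domain, msg);                /* publish to subscribers */
    cMsgFlush(domain, NULL);
    cMsgDisconnect(&domain);
    return 0;
}
```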
Summary
The DAQ Group must support ALL experimental programs at JLAB.
The current group must grow by at least 2 FTEs soon to meet current timelines.
CODA 2.6 is available now and will provide an integration path for CODA 3 technologies.
Much DAQ software development depends on custom hardware development in order to satisfy many 12 GeV requirements.
Current DAQ projects reflect the philosophy that we can progress to support the physics of the 12 GeV program through an evolution of the existing, proven system.