The Software Framework available at the ATLAS ROD Crate


The Software Framework available at the ATLAS ROD Crate

DISCLAIMER: focus on common software; might be biased by Level-1 Central Trigger.
THANKS TO: M. Joos, G. Lehmann.

R. Spiwoks, "Possible Replacement of VME for the Upgrades of ATLAS", CERN, 12-JUL-2012

Outline
- Introduction
- The Architecture Model: Ethernet - Single-Board Computer - VME
- The Software Framework: a layered framework
- Outlook

Trigger/DAQ System
[Diagram: Level-1 Trigger feeds the Level-2 Trigger and Event Filter via the Readout Link (SLink).]
- Level-1 Trigger: VME-based electronics + firmware; RODs + trigger processors
- Readout Link (SLink)
- Level-2 Trigger + Event Filter: PCs, i.e. computers + networks + software, except the ROIBuilder (VME) and the ROBIN (SLink-PCI)

The Architecture Model
[Diagram: the user connects via ssh over Ethernet to the SBC; the SBC talks VME to the ROD, ROD_BUSY, and trigger processor modules for configuration, status, and monitoring; ATLAS run control connects over TCP/IP; the main data flow leaves the crate on separate links.]
Readout Driver Crate:
- Can also be a trigger processor or other front-end crate; can also contain modules other than RODs, e.g. LTP, TTCvi, ROD_BUSY, etc.
- Provides interactive use, sub-detector specific diagnostic tools, and ATLAS run control through a single-board computer (SBC).
- VME is used for configuration, status, and monitoring; the event or trigger data use other links (e.g. SLink or dedicated backplanes).
Transfer protocol (parallel memory bus):
- single register READ/WRITE
- memory/FIFO block transfer
- interrupts

The Software: A Layered Framework
Layers, from top to bottom:
- ROD Crate DAQ
- Low-level Software (mainly sub-detector specific)
- Drivers and Libraries
- Operating System
- Hardware

Framework: Hardware
Provides common hardware infrastructure.
VME:
- Choice of crates, power supplies, and fans
- Common guidelines for the use of VME64x
- Common purchase & repair service (PH/ESE)
- Common "slow control": ATLAS DCS over CANbus
Single-board computers:
- A family of three generations of single-board computers: Intel-based central processing unit and PCI/X-VME bridge
Additional modules:
- ROD_BUSY module
- TTC modules: TTCvi, TTCex, etc.
- LTP, LTPI
- Other modules available from the Electronics Pool which are used in a ROD Crate framework, e.g. at test beams and in lab tests, and for which common low-level software exists

Framework: Operating System
Provides a common operating system and environment.
Common service, provided by CERN/IT & the ATLAS TDAQ system administrators:
- Operating system: Scientific Linux CERN
- Boot configuration (in particular loading DAQ drivers)
- Security (at Point 1)
- System/network monitoring (at Point 1)

Framework: Drivers & Libraries (1)
Provides common drivers and software infrastructure.
Common VME driver (pure CERN product):
- Dynamically loaded driver with static mappings (vmetab file)
- Single READ and WRITE:
  - FAST: uses memory-mapped I/O (ignores VME bus errors; requires an SBC with hardware byte swapping)
  - SAFE: uses programmed I/O under driver control (catches VME bus errors)
- Slave mappings (SBC memory as a slave on VME); to my knowledge not used
- Block transfer: allows several pipelined chains of block transfers
- Interrupt handling: wait synchronously on a semaphore, or register a POSIX signal with the driver; to my knowledge not often used
- Other features: CR/CSR access, SYSRST generation, SYSFAIL and bus-error signalling, and debugging features
Provides a C library and a C++ wrapper.

Framework: Drivers & Libraries (2)
Provides common drivers and software infrastructure.
Other drivers and libraries:
- PCI:
  - Read/write memory I/O
  - Link/unlink PCI devices
  - Could be reused in an xTCA environment
- Contiguous memory: kernel pages (or previously the BIG_PHYS area)
Other libraries:
- "Helpers": error codes, time stamps, bit strings, JTAG chains, simple menu programs, XML module templates, etc.
- And their tools for testing and debugging

Framework: Low-level Software
Provides sub-detector specific software:
- Sub-detector specific diagnostic tools for early development or debugging
- Sub-detector specific calibration tasks, requiring detailed knowledge of the calibration and front-end system
- Libraries (C++) which "encapsulate" the specificities so that the ROD (or other modules) can be used from the next higher level
This is probably the greater part of the software base.
Common low-level software:
- Common libraries for common modules, e.g. LTP, TTCvi, ROD_BUSY, etc.

Framework: ROD Crate DAQ
Provides integration into the ATLAS DAQ:
- Based on a skeleton program (C++), originally a ReadoutApplication developed for the ROS, which needs to be filled in with sub-detector specific calls, plus a set of common modules ready to be used
- Run control: distributed system of finite-state machines (controllers)
- Configuration databases: run parameters, calibration data, trigger configuration, etc.
- Monitoring: status/error messages, status values, histograms, event monitoring, data quality, persistent information, etc.
- Ready-to-use modules: controller, configuration, and monitoring are readily available for common modules (e.g. ROD_BUSY) and other DAQ tasks
Is based on the same software as the rest of the ATLAS TDAQ (ROS, L2, and EF). Have to see how that one evolves!

History
From my recollection:
Detector Interface Group (DIG):
- Define the interface between sub-detectors and DAQ
- Define the DAQ functionality required by sub-detectors
- "TDAQ Interfaces with Front-end Systems" requirements document, plus the Readout Link (SLink) and the raw event format
ROD Working Group (sub-group of the DIG):
- Exchange of information on ROD hardware
- Define common solutions, e.g. VME hardware/driver & common modules
ROD Crate DAQ Task Force (sub-group of the DIG):
- Define a common DAQ for the ROD crate → RCD
Common solutions have proven to be useful in terms of effort for development and maintenance (in particular for smaller groups).

Outlook
If we intend to replace VME: what will replace the existing services/infrastructure?
- Common solutions in terms of common hardware/software?
- Common service for purchase & repair? Common DCS?
What architecture/protocol to use?
- Architecture: based on an SBC which distributes commands to modules? Common driver and library?
- Protocol: single or grouped READ/WRITE? Block transfers? Interrupts?
- Performance: what latency for control and what throughput for monitoring are required?
Common ROD Crate DAQ (the same for all sub-detectors? the same as the ROS DAQ)? Common modules?
How to arrive at a common solution?
- Do we need another ROD Working Group?
- What is the manpower required to develop and maintain a common framework? And who can provide that manpower?
Must not forget the VME legacy: it will be around until Phase 2 and probably beyond.