June 19, 2002: A Software Skeleton for the Full Front-End Crate Test at BNL

June 19, 2002 A Software Skeleton for the Full Front-End Crate Test at BNL
Goal: to provide a working data acquisition (DAQ) system for the coming full FE crate test.
In this talk, I will describe the overall system setup, cover the various software components, and report their status and/or what we intend to do.
Kin Yip

June 19, 2002 [Block diagram of the test setup. Labels: Trigger Tower Board; Read-Out Card; TTC Control; PTG (trigger, veto); data through optical link; signal from a pulser (triggered by TTC); FE Crate with Calib. board and FEBs; ~VME; memory; PU; Host 1; Host 2; DAQ-1.]
"Host 2" (a single board in the same crate as the Read-Out Card) is a diskless node booted from "Host 1" through the network.

June 19, 2002 Control Crate (Wiener VME with CERN extension)
To control: Workstation → Control Crate → configure various boards in the FEC.
Using a PCI/VME bridge "Bit3", the PCI bus on the workstation "maia" and the remote VMEbus in the Control Crate share memory and I/O:
- Programmed I/O (PIO)
- Direct Memory Access (DMA)
We have upgraded the operating system and the software driver for Bit3 (now from SBS). We have tested:
- PIO: 3 MBytes per second
- DMA: MBytes per second → the obvious way to go
The PTG (Pulse Trigger Generator, BNL-made) has been used to generate triggers with this new OS and Bit3 driver. Other electronic components, including the TTC (with TTCvx and TTCvi) and the SPAC, will have to be integrated into this system.
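For illustration, here is a minimal C sketch of the two access modes through such a PCI/VME bridge. The device node, window size and register offset are invented placeholders, and the mmap/read pattern is only one common way such bridge drivers are used; it is not the actual SBS/Bit3 driver API.

/*
 * Minimal sketch of the two access modes through the PCI/VME bridge.
 * The device path, VME window size and register offset below are
 * hypothetical placeholders; the real SBS/Bit3 driver API differs.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define VME_WINDOW_SIZE  0x10000u     /* size of the mapped VME window (assumed) */
#define BOARD_CSR_OFFSET 0x0004u      /* control/status register offset (assumed) */

int main(void)
{
    /* --- PIO: map a VME window into user space and poke registers --- */
    int fd = open("/dev/vme_a24d32", O_RDWR);          /* hypothetical device node */
    if (fd < 0) { perror("open"); return EXIT_FAILURE; }

    volatile uint32_t *win = mmap(NULL, VME_WINDOW_SIZE, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); close(fd); return EXIT_FAILURE; }

    /* Single-word accesses go out as individual VME cycles (the measured PIO rate) */
    win[BOARD_CSR_OFFSET / 4] = 0x1;                    /* e.g. enable the board */
    uint32_t status = win[BOARD_CSR_OFFSET / 4];
    printf("board status = 0x%08x\n", status);

    /* --- DMA: let the bridge move a whole block in one transfer --- */
    uint32_t buffer[4096];
    ssize_t n = read(fd, buffer, sizeof(buffer));       /* block read via driver DMA */
    if (n < 0) perror("dma read");
    else       printf("DMA transferred %zd bytes\n", n);

    munmap((void *)win, VME_WINDOW_SIZE);
    close(fd);
    return EXIT_SUCCESS;
}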

June 19, 2002 Read-Out Crate [Wiener VME (9U/6U)]
Different from before, the CPU (VMIC) board is in the same crate as the electronic boards (2 Read-Out Cards).
Similarly, there is a PCI/VME bridge "Tundra-Universe" that we use to let the CPU board communicate with the electronic boards through the VME backplane.
We have also upgraded the operating system and the software driver for this PCI/VME bridge, and we have tested:
- DMA: MBytes per second
- PIO: almost the same as above
We will have to develop the software to configure and read out the two Read-Out Cards when they are available, presumably with help from the board maker, in a similar way to what we did with the ROD Demo Board.
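As a rough idea of the software still to be written, the following C sketch shows a per-event readout loop over the two Read-Out Cards. The register offsets, the "event ready" bit, the fragment size and the block-read stub are all invented placeholders; the real register layout and driver calls will come from the board maker.

/*
 * Sketch of the per-event readout loop foreseen for the two Read-Out Cards.
 * Register offsets, the "event ready" bit and the fragment size are invented
 * placeholders; the real layout will come from the board maker, as it did
 * for the ROD Demo Board.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define N_ROC              2          /* two Read-Out Cards in the crate */
#define ROC_STATUS_REG     0x00u      /* hypothetical status register offset */
#define ROC_EVT_READY      0x1u       /* hypothetical "event ready" bit */
#define ROC_FRAGMENT_WORDS 512u       /* hypothetical fragment size (32-bit words) */

/* Placeholder for a block transfer through the Tundra-Universe bridge (DMA).
 * Here it just copies memory so the sketch compiles; the real call would go
 * through the bridge driver. */
int vme_block_read(volatile uint32_t *src, uint32_t *dst, size_t nwords)
{
    memcpy(dst, (const void *)src, nwords * sizeof(uint32_t));
    return 0;
}

/* Read one event: wait for each card, then pull one fragment from each. */
int read_event(volatile uint32_t *roc_base[N_ROC], uint32_t *event_buffer)
{
    size_t filled = 0;

    for (int card = 0; card < N_ROC; ++card) {
        /* Poll the status register until this card has an event fragment
         * (the real loop would also honour timeouts and the PTG veto). */
        while ((roc_base[card][ROC_STATUS_REG / 4] & ROC_EVT_READY) == 0)
            ;

        /* Block-read (DMA) the fragment into the event buffer. */
        if (vme_block_read(roc_base[card], event_buffer + filled,
                           ROC_FRAGMENT_WORDS) != 0)
            return -1;

        filled += ROC_FRAGMENT_WORDS;
    }

    printf("assembled event with %zu words\n", filled);
    return 0;
}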

Two controllers in two different crates

Controlling trigger rate

June 19, 2002 Data volume and storage
A very rough estimate: number of channels ~ 16 × 128 = 2048 channels; at ~2 KBytes per FEB × 16 FEB, that is ~32 KBytes per event.
In this very rough estimation, if we take about 100 K events a day for 5 months, we will end up with ~500 GB of data.
We'll use Magda (a distributed data manager prototype for Grid-resident data developed at BNL) to manage data transfer and storage.
We have tested and transferred data from our workstation through the USATLAS cluster to the HPSS (High Performance Storage System) at BNL. The automatic procedures require two endless loops: one on our workstation (the one connected to the Control Crate) and one on the USATLAS cluster, which has the appropriate read/write privilege from/to the HPSS.
If desirable, we can replicate the data from BNL to CERN (Castor), which is said to cost 2 SF per GByte.
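A quick back-of-the-envelope check of these numbers; the 150-day figure is an assumed stand-in for "5 months".

/* Back-of-the-envelope check of the data-volume numbers on this slide.
 * The 150-day figure is an assumed value for "5 months". */
#include <stdio.h>

int main(void)
{
    const double channels       = 16.0 * 128.0;   /* 16 FEB x 128 channels */
    const double event_size_kb  = 16.0 * 2.0;     /* 2 KB per FEB -> 32 KB per event */
    const double events_per_day = 100e3;          /* ~100 K events a day */
    const double days           = 150.0;          /* ~5 months (assumed) */

    const double total_gb = event_size_kb * events_per_day * days / 1e6;

    printf("channels     : %.0f\n", channels);          /* 2048 */
    printf("event size   : %.0f KB\n", event_size_kb);  /* 32 KB */
    printf("total volume : ~%.0f GB\n", total_gb);      /* ~480 GB, i.e. ~500 GB */
    return 0;
}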

June 19, 2002 Event Monitoring in DAQ-1
Basically, the "Event Sampler" process/interface in DAQ-1 gets the data and passes it to the "Monitoring Task" process/interface.
The "Monitoring Task" unpacks and analyzes the data to produce, say, (Root) histograms and then uses the "Histogram Provider" to publish them.
The "User Histogram Task" "receives" the histograms so that any user can examine them.
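Schematically, the chain could be used as in the C sketch below. All of the function and type names are invented stand-ins used only to illustrate the data flow; they are not the real DAQ-1 monitoring interfaces.

/*
 * Schematic of the DAQ-1 monitoring chain.  The names daq1_sample_event,
 * histo_provider_publish and Histogram are hypothetical stand-ins, not the
 * actual DAQ-1 monitoring API.
 */
#include <stddef.h>
#include <stdint.h>

typedef struct { double bins[100]; } Histogram;   /* stand-in for a Root histogram */

/* --- hypothetical hooks corresponding to the DAQ-1 components --- */
size_t daq1_sample_event(uint32_t *buf, size_t max_words);            /* Event Sampler      */
void   histo_provider_publish(const char *name, const Histogram *h);  /* Histogram Provider */

/* Monitoring Task: unpack each sampled event and fill/publish histograms.
 * The User Histogram Task (not shown) receives what is published here. */
void monitoring_task(void)
{
    static Histogram adc_spectrum;     /* e.g. one spectrum of raw ADC words */
    uint32_t event[8192];

    for (;;) {
        size_t nwords = daq1_sample_event(event, 8192);   /* get a sampled event */
        if (nwords == 0)
            continue;

        /* Unpack and analyze: here, just histogram the raw data words. */
        for (size_t i = 0; i < nwords; ++i)
            adc_spectrum.bins[event[i] % 100] += 1.0;

        histo_provider_publish("adc_spectrum", &adc_spectrum);  /* publish for users */
    }
}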

June 19, 2002

Possible realistic monitoring plots

June 19, 2002 Data format, channel mapping and analysis
Data format will be essentially whatever the Read-Out Card maker provides.
Each run will start with a new file, and the run number is part of the filename. We expect to have some configuration information in the header/trailer.
For channel mapping, we want to put the mapping in the database; I have started with the one in Athena. We have to take care of all the hardware components such as feedthrough, preamplifier, motherboard, etc.
Analysis code in the framework of a simple C program will materialize at the debugging stage, as we need to check whether the data read out is correct, just as happened in the ROD Demo exercise.
For the general users, we provide the I/O unpacking routine and a 3-stage skeleton interface, namely "initialization, execution and finalization", so that users can develop their analysis code easily in this framework.
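A minimal sketch of how a user analysis could look in this skeleton is shown below. Only the initialization/execution/finalization structure comes from the slide; the function names and the signature of the provided unpacking routine are assumptions.

/*
 * Minimal sketch of the 3-stage user-analysis skeleton.  The function names,
 * the unpack_event() signature and the event loop are assumptions; only the
 * initialization / execution / finalization structure comes from the slide.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_WORDS 8192

/* Provided I/O unpacking routine (assumed signature): reads the next event
 * from the run file and returns the number of 32-bit words, 0 at end of run. */
size_t unpack_event(FILE *run_file, uint32_t *words, size_t max_words);

/* --- the three user hooks ---------------------------------------------- */
static int user_initialize(void) { /* book histograms, read the channel map */ return 0; }
static int user_execute(const uint32_t *evt, size_t n)
{
    /* per-event analysis, e.g. check that the readout data are sensible */
    (void)evt; (void)n;
    return 0;
}
static int user_finalize(void)   { /* write out results */ return 0; }

int main(int argc, char *argv[])
{
    if (argc != 2) { fprintf(stderr, "usage: %s <run file>\n", argv[0]); return EXIT_FAILURE; }

    FILE *run_file = fopen(argv[1], "rb");   /* run number is part of the filename */
    if (!run_file) { perror("fopen"); return EXIT_FAILURE; }

    uint32_t event[MAX_WORDS];
    size_t nwords;

    user_initialize();                                   /* stage 1: initialization */
    while ((nwords = unpack_event(run_file, event, MAX_WORDS)) > 0)
        user_execute(event, nwords);                     /* stage 2: execution      */
    user_finalize();                                     /* stage 3: finalization   */

    fclose(run_file);
    return EXIT_SUCCESS;
}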

Runbook, Bookkeeping and DCS
Through the Web and Database servers, we will provide the "Runbook", from which users may search for the system configuration of each run.
We will set up a simple report-logging system for the "run shifters" to write down their concerns, or any special features or problems, for a given run or time.
We will probably use the OBK (Online BookKeeping) feature in DAQ-1, as it has easy access to all the run information. The OBK experts have promised to provide an updated version with a Web-based interface. In any case, the information will be available through the Web server.
The DCS (Detector Control System) measurements taken from the FEC will be done asynchronously with respect to the rest of the data acquisition.
We have sent a PC to CERN, and the DCS software system is being set up. We have to figure out which parameters we need to measure.
The DCS information will be transferred to the Database and Web servers so that it is readily available to all users.