Slow Control and Run Initialization Byte-wise Environment

Presentation transcript:

A new Slow Control and Run Initialization Byte-wise Environment (SCRIBE) for the quality control of mass-produced CMS GEM detectors

S. Colafranceschi (for the CMS muon group)
Florida Institute of Technology, Department of Physics and Space Sciences, Melbourne, FL, USA

Motivation

The CMS muon detector must maintain its current performance through the Phase-2 upgrade. After the next LHC long shutdown (LS2), CMS will have to cope with substantially higher rates, so redundancy must be increased in the muon endcap region to improve fault tolerance. This will be achieved by installing new GEM detectors, which feature high rate capability O(MHz/cm²), good time resolution (≈ 8 ns), and good spatial resolution O(250 µm) for muon triggering and tracking.

Detector Mass Production Requirements

The GEM detectors will be mass-produced at several research centers and laboratories. The assembly sites have implemented a flexible production scheme in which the construction load is balanced among them (CERN, BARC, FIT, INFN-Bari, INFN-Frascati, U. Ghent, U. Delhi). A common quality control procedure has been established to ensure standardization across construction sites: all sites follow an extensive, common, step-by-step procedure with nearly identical test stands, and quality controls are implemented at every construction stage with mostly automated procedures.

The most stringent requirement is detector uniformity during mass production. SCRIBE has been developed to standardize the detector-uniformity quality control; it integrates electronics configuration, data taking, and prompt event decoding, unpacking, and reconstruction.

Using SCRIBE with the RD51 Scalable Readout System (SRS), DATE (DAQ), and AMORE (Analysis)

SCRIBE provides a web interface that supports the entire experimental chain: electronics configuration, data taking, and data analysis.

SCRIBE features an integrated environment with:
- Automatic installation through an RPM package (contact the author if interested)
- Support for any RPM-compatible Linux distribution
- A graphical user interface (GUI) implemented as a dynamic web application (Apache-based)
- Multi-client, cross-platform, and cross-device operation
- Support for all existing SRS hardware (FEC versions 1.1, 1.3, 3, 6) and firmware
- Real-time feedback when reading or writing any SRS register (a schematic slow-control example follows the functionality list below)
- Near-real-time event reconstruction with AMORE configured to run on multiple cores

SCRIBE functionalities and use cases:

1) Front-end and DAQ configuration
- Declaration of Front-End Concentrator (FEC) card IP addresses
- Reading, writing, and monitoring of control registers
- Pedestal configuration, if the FEC firmware supports zero suppression (ZS); a sketch of the pedestal computation is given below
- Pedestal monitoring (if the FEC supports ZS)
→ SRS is ready to take data

2) Data taking
- Metadata initialization: site location, detector variables, trigger type
- Run start/stop with automatic run numbering and metadata saving
- Single or multiple runs, with or without an SRS memory scan
→ SRS is ready to deliver data to the DAQ computer

3) Event unpacking, decoding, and reconstruction
- Declaration of AMORE reconstruction settings
- Automatic analysis of recorded runs according to the adopted use case (single PC or cluster of PCs)
- Custom analysis via user-defined routines
→ SRS raw data are processed
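As a rough illustration of what "reading or writing any SRS register" involves under the hood, the following is a minimal sketch of a slow-control exchange with a FEC card. The IP address, port number, and 32-bit word layout shown here are assumptions for illustration only; the authoritative packet format is defined in the RD51 SRS slow-control documentation.

```python
# Minimal sketch of an SRS FEC slow-control exchange over UDP (illustration only).
# The FEC address, port, and word layout below are assumptions, not the official
# protocol; consult the RD51 SRS slow-control specification for the real format.
import socket
import struct

FEC_IP = "10.0.0.2"   # typical default FEC address (assumption)
SC_PORT = 6007        # slow-control UDP port (assumption)

def write_register(addr: int, value: int, request_id: int = 0) -> bytes:
    """Send one register write request and return the FEC's raw acknowledgment."""
    # Schematic request: four big-endian 32-bit words (id, subaddress, addr, data).
    payload = struct.pack(">IIII", request_id, 0xFFFFFFFF, addr, value)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(1.0)                     # fail fast if the FEC is unreachable
        s.sendto(payload, (FEC_IP, SC_PORT))
        reply, _ = s.recvfrom(4096)           # the FEC echoes back a reply datagram
    return reply
```

SCRIBE's real-time feedback corresponds to parsing such reply datagrams and displaying the read-back register content in the web GUI.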
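The pedestal configuration step amounts to measuring, per channel, the baseline and noise from a dedicated pedestal run and deriving the ZS threshold that is written to the FEC firmware. Below is a minimal sketch of that computation, assuming the samples are already unpacked into an events × channels array; the function name, toy data, and the 5-sigma cut are illustrative choices, not SCRIBE's actual values.

```python
# Sketch of a per-channel pedestal/threshold computation (illustration only).
# Assumes pedestal-run samples unpacked into a (n_events, n_channels) array.
import numpy as np

def pedestal_constants(samples: np.ndarray, n_sigma: float = 5.0):
    """Return per-channel baseline, noise, and a ZS threshold n_sigma above it."""
    mean = samples.mean(axis=0)          # baseline (pedestal) per channel
    sigma = samples.std(axis=0)          # electronic noise per channel
    threshold = mean + n_sigma * sigma   # channel is read out only above this
    return mean, sigma, threshold

# 3072 channels, as for a GE1/1 detector read out by 24 APV25 chips (128 ch each)
toy_run = np.random.normal(loc=120.0, scale=2.0, size=(1000, 3072))
mean, sigma, threshold = pedestal_constants(toy_run)
```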
At a glance:

Front-end and DAQ configuration: full read/write access to all memories; pedestal calibration (ZS); channel masking.
Data taking: single or multiple runs; memory scans (latency, threshold).
Event reconstruction and analysis: online reconstruction in the AMORE framework; support for the CMS GEM Analysis Framework (https://github.com/bdorney/CMS_GEM_Analysis_Framework).

SCRIBE supports ZS, which improves DAQ and reconstruction performance by reducing both the data-stream bandwidth and the raw-data file size. In addition, SCRIBE drives AMORE in parallel to achieve near-real-time event reconstruction.

DAQ performance (24 APV front-end chips):

Mode             | Channels read out | APV time bins | Data stream (kB/event) | Theoretical max data-stream rate | Raw-data file size (10M events)
Standard         | 3072              | 27            | 100                    | ≈ 120 Hz                         | 960 GB
Zero suppression | –                 | ≈ 1-5         | 0.64-0.86              | ≈ 18 kHz                         | 4.3 GB

AMORE reconstruction performance (24 APV front-end chips):

Mode             | Channels read out | APV time bins | Event rate (1 Haswell 2.8 GHz core) | Event rate (8 Haswell 2.8 GHz cores)
Standard         | 3072              | 27            | 50 Hz                               | 250 Hz
Zero suppression | –                 | ≈ 1-5         | 750 Hz                              | 3.8 kHz
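The standard and ZS rows of the first table are roughly consistent with a single effective link bandwidth. A back-of-the-envelope check follows; the bandwidth is inferred from the table's own standard-mode row, not taken from the SRS specifications.

```python
# Rough consistency check of the DAQ performance table (illustration only).
# The effective bandwidth is inferred from the table's standard-mode row.
std_event_kb = 100.0     # standard mode: 3072 channels x 27 time bins
std_rate_hz = 120.0      # standard mode: theoretical maximum rate
bandwidth_kb_s = std_event_kb * std_rate_hz      # ~12 MB/s effective throughput

zs_event_kb = 0.75       # mid-range of the 0.64-0.86 kB/event ZS figure
zs_rate_hz = bandwidth_kb_s / zs_event_kb        # ~16 kHz vs ~18 kHz in the table
print(f"effective bandwidth ~{bandwidth_kb_s / 1e3:.0f} MB/s, "
      f"ZS rate ~{zs_rate_hz / 1e3:.0f} kHz")
```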
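The roughly five-fold gain from one to eight cores in the second table comes from reconstructing independent data in parallel. A minimal sketch of that pattern is shown below; reconstruct_run() is a hypothetical stand-in for one AMORE pass over a raw-data file (the real reconstruction modules are C++ and are driven by SCRIBE).

```python
# Sketch of run-level parallel reconstruction (illustration only).
# reconstruct_run() is a hypothetical stand-in for one AMORE pass over a file.
from multiprocessing import Pool

def reconstruct_run(run_file: str) -> str:
    # placeholder for unpacking, decoding, and reconstructing one raw-data file
    return f"{run_file}: reconstructed"

if __name__ == "__main__":
    runs = [f"run_{i:04d}.raw" for i in range(8)]   # hypothetical file names
    with Pool(processes=8) as pool:                 # e.g., one worker per core
        for result in pool.imap_unordered(reconstruct_run, runs):
            print(result)
```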
SCRIBE at CMS GEM Assembly Sites: Detector Response Quality Control in 2 Hours

The detector response test measures the pulse-height distribution over the entire active surface of a CMS GE1/1 detector. The SCRIBE use case at the CMS GEM assembly sites:
- The GEM detector readout electronics, equipped with 24 APV25 front-end chips, is initialized.
- Pedestals are calibrated in each of the 3072 detector channels and the values are stored in the firmware.
- Zero-suppressed data taking typically records 20M events (≈ 10 GB of raw data).
- While the DAQ is taking data, parallel event reconstruction runs alongside it to provide near-real-time feedback to the user.

[Figures: SCRIBE general settings tab for FEC configuration/initialization; SCRIBE pedestal monitoring]

Conclusions: Better User Experience and Performance, Quicker Learning Curve for SRS Users

The ZS performance reduces the data-taking and analysis time from several weeks to a couple of hours per chamber; without it, the CMS GEM assembly sites could not complete mass production in time for installation. The graphical web user experience provides ease of use and modularity for users running both simple and complex setups, and SCRIBE's gentle learning curve lets new users operate the SRS, from initialization to data analysis, within a few minutes.

For more info: scolafranceschi@fit.edu

IEEE Nuclear Science Symposium & Medical Imaging Conference • Strasbourg, France • Oct. 29 – Nov. 6, 2016