Stefan Koestner, on behalf of the LHCb Online Group. IEEE Nuclear Science Symposium, San Diego, Oct. 31st 2006.

Dedicated to B-physics: a single-arm forward spectrometer (acceptance 10–300 mrad) operating at L = 2 × 10^32 cm⁻² s⁻¹ (~1 interaction per bunch crossing). bb̄ pairs are produced at ~100 kHz, but only a small fraction of the events is of interest, so an efficient trigger (implemented in two levels) is necessary to enrich the data sample. Ten sub-detectors with approximately 1.1 million readout channels in total; the average total event size (at average occupancy) is 35 kB after zero suppression.

[Diagram: 40 MHz bunch crossings → L0 electronics → L1 electronics (collection and preprocessing into Multi-Event Packets) → PC farm (HLT trigger, reconstruction) → storage.] L0 trigger (~1 MHz accept rate): 4 μs fixed latency, time-synchronized to the front-end chips (calorimeters, muon system & pile-up veto). 1 MHz readout (~35 GB/s) via Gigabit Ethernet. HLT trigger: ~2 kHz to tape, i.e. ~70 MB/s of raw data. Push protocol, assuming the following stage has enough buffer; centralized throttle via the TFC system (synchronous).
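For orientation, the quoted bandwidths follow directly from these rates and the 35 kB average event size:

```latex
R_{\text{readout}} = 1\,\text{MHz} \times 35\,\text{kB} = 35\,\text{GB/s},
\qquad
R_{\text{storage}} \approx 2\,\text{kHz} \times 35\,\text{kB} = 70\,\text{MB/s}.
```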

[Same data-flow diagram, with the L1 electronics highlighted.] The TELL1 (Trigger ELectronics Level 1) board: FPGA-based, adopted by almost all sub-detectors; in total ~300 boards with ~400 registers & memory blocks.

Stefan Koestner IEEE NSS – San Diego 2006 ADC conversion Synchronisation, reordering, pedestal subtraction Common mode suppression Zero Suppression Multi Event Packing IP Framing Gigabit Ethernet Link ECS interface via creditcard sized PC connected to 10/100 Ethernet Controls network separated from Data network isolated access path  robustness

The CCPC: an SM520PCX (Digital-Logic) credit-card PC with an i486-compatible microcontroller (AMD Elan SC520) running a Linux kernel at 133 MHz, reading from the local bus at 20 MB/s; its filesystem is shared over a server. Access to the board goes via the gluecard over a PLX PCI9030 bridge: a parallel bus (8/16/32 bit), 3 JTAG chains (2 MHz), 4 I²C lines and 9 GPIO lines. The low-level libraries are maintained under SLC4.
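As a rough illustration of the kind of access the low-level libraries provide, here is a minimal C++ sketch of a register write over one of the gluecard's I²C lines. All function names, addresses and values are invented stand-ins, not the actual CCPC library API; the stubs only make the sketch self-contained.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Hypothetical stand-ins for the CCPC low-level library; the real
// LHCb API differs. They are stubbed out here so the sketch compiles;
// on the board they would talk to the gluecard through the PLX
// PCI9030 bridge.
int ccpc_i2c_open(int line) { return line; }                     // stub
int ccpc_i2c_write(int h, uint8_t chip,
                   const uint8_t* buf, size_t n) {
    (void)h; (void)chip; (void)buf; (void)n;                     // stub
    return 0;
}

int main() {
    int bus = ccpc_i2c_open(0);                // one of the 4 I2C lines
    if (bus < 0) { std::fprintf(stderr, "i2c open failed\n"); return 1; }

    // Illustrative register write: chip address, register and value
    // bytes are invented for the example.
    const uint8_t payload[] = { 0x10, 0x01 };  // register 0x10 <- 0x01
    if (ccpc_i2c_write(bus, 0x42, payload, sizeof payload) != 0)
        std::fprintf(stderr, "I2C write failed\n");
    return 0;
}
```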

A uniform and homogeneous control system based on the Joint Controls Project (JCOP) framework, tailored to LHCb's needs: it eases development as well as operation, is distributed and scalable, and provides coherent interfaces. It covers configuration, monitoring and operation of the whole experiment: data acquisition & trigger, detector operations (DCS), experimental infrastructure, and interaction with the accelerator & the CERN Safety System.

CERN's common JCOP framework is built upon an industrial SCADA system, PVSS II: data (registers or probes) are stored in 'data points' (complex structures) connected to the PVSS-internal memory database. A change of these entries triggers a callback function, and alerts can be launched when certain thresholds are crossed. The framework is extended to allow for finite-state modelling & hierarchical control: an intuitive model of each subsystem, several levels of abstraction, subsystems that can run on their own, and auto-recovery from errors.
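PVSS is scripted in its own Ctrl language, so purely as a language-neutral illustration, this C++ sketch mimics the data-point mechanism: registered callbacks fire on every value change and can raise an alert when a threshold is crossed.

```cpp
#include <functional>
#include <iostream>
#include <vector>

// Minimal analogue of a PVSS data point: a value plus callbacks that
// fire on every change. (PVSS does this persistently, distributed,
// and with archiving; none of that is modelled here.)
template <typename T>
class DataPoint {
public:
    void connect(std::function<void(const T&)> cb) { callbacks_.push_back(cb); }
    void set(const T& v) {
        value_ = v;
        for (auto& cb : callbacks_) cb(value_);  // trigger callbacks on change
    }
private:
    T value_{};
    std::vector<std::function<void(const T&)>> callbacks_;
};

int main() {
    DataPoint<int> temperature;   // e.g. a board temperature probe
    temperature.connect([](const int& t) {
        if (t > 60) std::cout << "ALERT: temperature " << t << " above threshold\n";
    });
    temperature.set(55);          // below threshold: nothing happens
    temperature.set(65);          // crosses the threshold -> alert fires
}
```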

The interface from the ECS to the hardware is realized with the Distributed Information Management system (DIM), a portable, lightweight communication layer: a DIM server running on the CCPC publishes its services to a DIM Name Server (DNS), from where the client (ECS) can subscribe to them. Data are exchanged peer-to-peer from server to client: the client sends commands (write/read) and the server updates its services (data/status). Portability: clients can be installed on any machine by just specifying the DNS node (no need to take care of connectivity). Robustness: if a server crashes, it can easily republish on the DNS node.
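DIM's C++ bindings make this pattern compact. Below is a minimal sketch of a server as it could run on the CCPC, publishing a status service and accepting a command. The service and command names are invented for the example, and a running DIM name server (DIM_DNS_NODE set) is assumed.

```cpp
#include <dis.hxx>     // DIM server-side API
#include <cstdio>
#include <unistd.h>

// Command handler: the ECS client writes, the server reacts.
class ResetCommand : public DimCommand {
public:
    ResetCommand() : DimCommand("TELL1_001/RESET", "C") {}  // name is illustrative
    void commandHandler() override {
        // getString() returns the payload the client sent
        std::printf("received command: %s\n", getString());
    }
};

int main() {
    int status = 0;
    DimService statusService("TELL1_001/STATUS", status);  // publish a service
    ResetCommand reset;
    DimServer::start("TELL1_001");   // register with the DIM name server

    for (;;) {                       // update the published status periodically
        status = 1;                  // e.g. result of a board self-check
        statusService.updateService(status);
        sleep(5);
    }
}
```

On the client side, a DimInfo subscription to the status service and DimClient::sendCommand() complete the loop; a monitoring sketch follows further below.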

We introduced a tool that models the hardware as PVSS data points: it hides the diversity and complexity of the various hardware/bus types, takes care of the communication with the TELL1 (using DIM), and offers framework function calls by register name.
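The tool itself lives inside PVSS, but the underlying idea, addressing a register by name and letting a lower layer pick the right bus, can be sketched in C++. The register table and all names here are invented for the example.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

enum class Bus { I2C, JTAG, Parallel };

// Invented register map: the real tool derives this information from
// the hardware description stored in PVSS data points.
struct RegisterSpec { Bus bus; uint32_t address; };

static const std::map<std::string, RegisterSpec> kRegisters = {
    {"PedestalEnable", {Bus::I2C,      0x10}},
    {"ZeroSuppThresh", {Bus::Parallel, 0x2004}},
};

// Write a register by name; the caller never sees the bus type.
bool writeRegister(const std::string& name, uint32_t value) {
    auto it = kRegisters.find(name);
    if (it == kRegisters.end()) return false;
    // A real implementation would dispatch to the I2C/JTAG/parallel
    // driver here (via DIM to the CCPC); this sketch just logs.
    std::cout << "write " << name << " @0x" << std::hex
              << it->second.address << std::dec << " = " << value << "\n";
    return true;
}

int main() { writeRegister("ZeroSuppThresh", 42); }
```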

Further abstraction as finite state machines (SMI++), with defined transitions from one board state to another and the possibility to auto-recover. Device units act on the hardware (data points); their state can be triggered by the hardware. Control units send commands to their children (DUs); a state transition occurs when the children change. Firmware can be downloaded to a board in state NOT_READY. Configuration of a board (mainly writing registers) proceeds via 'recipes' downloaded from the configuration database (different settings for different run conditions).
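A much-simplified C++ sketch of the device-unit state logic; apart from NOT_READY, the state names and transitions are assumptions for illustration, not the actual SMI++ model.

```cpp
#include <iostream>

// Simplified device-unit FSM: NOT_READY -> READY -> RUNNING, with
// ERROR recoverable back to NOT_READY (autorecovery). The real model
// is written in SMI++ and driven from PVSS; state names other than
// NOT_READY are assumed here.
enum class State { NOT_READY, READY, RUNNING, ERROR };

class DeviceUnit {
public:
    State state = State::NOT_READY;
    void configure() {   // apply recipe: write registers, load firmware
        if (state == State::NOT_READY) state = State::READY;
    }
    void start()           { if (state == State::READY) state = State::RUNNING; }
    void onHardwareError() { state = State::ERROR; }     // triggered by the board
    void recover()         {                             // autorecovery path
        if (state == State::ERROR) state = State::NOT_READY;
    }
};

int main() {
    DeviceUnit du;
    du.configure();         // NOT_READY -> READY (recipe applied)
    du.start();             // READY -> RUNNING
    du.onHardwareError();   // RUNNING -> ERROR
    du.recover();           // ERROR -> NOT_READY, ready to reconfigure
    std::cout << static_cast<int>(du.state) << "\n";
}
```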

An example of a TELL1 CU panel (under construction): the states of the children (DUs) define the state of the CU, and commands are propagated downwards. Partitioning of the system is a key feature (partitions can have their own trigger). Clicking on a device unit opens the operator panels for that device (board).

Screenshot of a TELL1 DU panel, the operator interface: it identifies the board type, evaluates malfunctioning parts of the board, gives a quick overview of the important counters and registers, and allows reconfiguration of the refresh rate.

Registers (data points) can be connected to callback functions which evaluate the functionality of subcomponents (inside a Ctrl script); error recovery or alerts can be implemented on top.
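In LHCb this lives in PVSS Ctrl scripts; expressed with DIM's C++ client API, the same pattern could look as follows. The service and command names and the error threshold are invented for the example.

```cpp
#include <dic.hxx>     // DIM client-side API
#include <unistd.h>

// Subscribe to an error counter published by the board's DIM server;
// infoHandler() is the callback fired on every update.
class ErrorMonitor : public DimInfo {
public:
    // "TELL1_001/ERRORS" is an invented service name; -1 is the
    // default value delivered if the service is unavailable.
    ErrorMonitor() : DimInfo("TELL1_001/ERRORS", -1) {}
    void infoHandler() override {
        int errors = getInt();
        if (errors < 0) return;        // server not reachable
        if (errors > 10) {             // invented threshold
            // error recovery: ask the board to reset itself
            char cmd[] = "reset";
            DimClient::sendCommand("TELL1_001/RESET", cmd);
        }
    }
};

int main() {
    ErrorMonitor monitor;   // callbacks arrive asynchronously from here on
    for (;;) pause();       // keep the process alive
}
```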

Many panels are available to monitor the registers, grouped into sub-panels, plus panels for interaction with and reconfiguration of the registers.

We presented the basic concepts for integrating the LHCb readout boards into the Experiment Control System, with a focus on the various layers and frameworks used. The system is currently under test for the commissioning of the sub-detectors (refinements are still ongoing). A generic design was of major importance to allow its use by different sub-detectors and board types. Further wizards will facilitate board implementation by non-experts.