LHC BLM Software audit June 2008

BLM Software components
- Handled by the BI Software section
  - Expert GUIs - not discussed as part of this audit
  - Real-Time software - the topic for this presentation
- Handled by OP / CO
  - Fixed displays
  - Settings management
  - Operational GUIs

Real-Time software architecture
Based on FESA (the AB standard framework for real-time software), which provides:
- Real-time action scheduling
  - For the BLMs, most actions are triggered by events from the central timing system
  - Plus some special triggers from the collimation system
- A communication mechanism (server actions) with clients: GET, SET and SUBSCRIBE
- Basic configuration using NFS XML files
- Internal memory organisation
  - All variables and constants shared between real-time actions and server actions are formally defined in the design
  - The BLM design has > 450 such variables!
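To make the split between real-time actions, server actions and shared device data more concrete, here is a minimal C++ sketch. It does not use the real FESA API; the RealTimeAction / ServerAction base classes and the DeviceData structure are hypothetical illustrations of the pattern only.

```cpp
// Illustrative sketch only - hypothetical classes, NOT the real FESA API.
#include <array>
#include <cstdint>
#include <mutex>

// Shared device data: in FESA every such field is formally declared in the
// design document (the BLM design has > 450 of them).
struct DeviceData {
    std::array<double, 12> runningSums{};   // one value per running-sum window
    std::array<double, 12> thresholds{};    // currently active thresholds
    std::uint32_t status = 0;               // combined DAB / combiner status
    std::mutex mtx;                         // protects concurrent RT/server access
};

// A real-time action: woken up by a timing event (e.g. the 1 Hz tick).
class RealTimeAction {
public:
    explicit RealTimeAction(DeviceData& d) : data(d) {}
    virtual ~RealTimeAction() = default;
    virtual void execute() = 0;             // called by the scheduler on its event
protected:
    DeviceData& data;
};

// A server action (property): serves GET/SET/SUBSCRIBE requests from clients.
class ServerAction {
public:
    explicit ServerAction(DeviceData& d) : data(d) {}
    virtual ~ServerAction() = default;
    virtual void get() = 0;                 // fill the client's value container
protected:
    DeviceData& data;
};
```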

Real-time action scheduling - Acquisition
- Triggered by the 1 Hz tick
- Acquires the 12 running-sum values from the 16 DABs (Digital Acquisition Boards)
  - Also reads the currently active thresholds for each monitor
- Reads all the thresholds and monitor names from the non-volatile memory
  - This may be removed, as it is quite heavy at 1 Hz!
- Gets status from the 16 DABs + status from the combiner card
- Copies some data / status info from the 16 DABs to the combiner card when the combiner sets a test flag
- Keeps a history of the last 512 running sums (used later in the post-mortem data)
- Presently one Acquisition action handles everything, but this may be changed to individual actions to allow accurate diagnostics and benchmarking
  - No change in functionality though!
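As an illustration of the 512-deep running-sum history mentioned above, here is a minimal C++ sketch of a ring buffer filled once per 1 Hz tick. The constants and structure names are placeholders, not the actual BLM front-end code.

```cpp
// Illustrative sketch of the 1 Hz acquisition history - placeholder names,
// not the actual BLM front-end code.
#include <array>
#include <cstddef>

constexpr std::size_t kRunningSums  = 12;   // running-sum windows per monitor
constexpr std::size_t kHistoryDepth = 512;  // kept for the post-mortem data

struct AcquisitionFrame {
    std::array<double, kRunningSums> runningSums{};
};

class AcquisitionHistory {
public:
    // Called once per 1 Hz tick with the freshly read frame.
    void push(const AcquisitionFrame& frame) {
        buffer_[head_] = frame;
        head_ = (head_ + 1) % kHistoryDepth;    // overwrite the oldest entry
        if (size_ < kHistoryDepth) ++size_;
    }

    // Number of frames currently stored (saturates at 512).
    std::size_t size() const { return size_; }

private:
    std::array<AcquisitionFrame, kHistoryDepth> buffer_{};
    std::size_t head_ = 0;
    std::size_t size_ = 0;
};
```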

Real-time action scheduling - Capture
- Started by operators sending the capture event
  - Buffers are actually started using BI's BST timing, which is triggered by the capture event
- 2048 packets with 2 different capture modes: SLOW (2.54 ms running sum) and FAST (40 us running sum)
  - The control room can SET this mode to SLOW or FAST before sending the capture event
- On the CPU, 2 local events are created based on the capture event:
  - SLOW: triggered on capture event + 6 seconds
  - FAST: triggered on capture event + 90 ms
- One real-time action is woken up by these 2 local events (+90 ms & +6 s)
  - Uses the 'data-ready' flag on the acquisition cards to determine whether the data capture is complete
  - If the capture mode is set to FAST, this flag will be set at capture event + ~82 ms
  - Otherwise, the flag will be set at capture event + ~5.2 seconds
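The ~82 ms and ~5.2 s completion times quoted above follow directly from the buffer depth times the running-sum period. The small sketch below just spells out that arithmetic; the constant names are illustrative.

```cpp
// Illustrative arithmetic for the capture completion times quoted above.
#include <cstdio>

int main() {
    constexpr double kPackets     = 2048;     // capture buffer depth
    constexpr double kFastPeriodS = 40e-6;    // FAST mode: 40 us running sum
    constexpr double kSlowPeriodS = 2.54e-3;  // SLOW mode: 2.54 ms running sum

    // FAST: 2048 * 40 us   ~= 0.082 s -> data-ready at capture event + ~82 ms
    // SLOW: 2048 * 2.54 ms ~= 5.2 s   -> data-ready at capture event + ~5.2 s
    std::printf("FAST capture complete after %.3f s\n", kPackets * kFastPeriodS);
    std::printf("SLOW capture complete after %.2f s\n", kPackets * kSlowPeriodS);
    return 0;
}
```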

Real-time action scheduling - Beam Dump (XPOC) & Post Mortem
- Unlike Capture buffers, which are started on demand, the Beam Dump & Post Mortem buffers are frozen on demand
- Beam Dump buffers contain 200 * 40 us samples
  - Using the BST, an additional 4 ms delay is applied to the Beam Dump trigger
  - Therefore we measure 4 ms before the beam dump and 4 ms after the beam dump
- Post Mortem buffers contain 2048 * 40 us samples
  - The BST delay is 4 ms
  - Therefore we measure ~78 ms before the beam dump and 4 ms after the beam dump
- The 2 corresponding real-time actions are triggered at least 4 ms after the respective event from the central timing
  - The Beam Dump action notifies clients that data is ready
  - The Post Mortem action pushes the data to a post-mortem server
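The before/after coverage quoted above is simply the buffer length minus the trigger delay; the sketch below makes that explicit (constant names are for illustration only).

```cpp
// Illustrative arithmetic for the Beam Dump and Post Mortem windows quoted above.
#include <cstdio>

int main() {
    constexpr double kSamplePeriodMs = 0.040;  // 40 us per sample
    constexpr double kBstDelayMs     = 4.0;    // extra delay applied via the BST

    // Beam Dump (XPOC): 200 samples -> 8 ms total, split 4 ms before / 4 ms after
    constexpr double dumpWindowMs = 200 * kSamplePeriodMs;
    std::printf("Beam Dump: %.1f ms before, %.1f ms after the dump\n",
                dumpWindowMs - kBstDelayMs, kBstDelayMs);

    // Post Mortem: 2048 samples -> ~81.9 ms total, split ~78 ms before / 4 ms after
    constexpr double pmWindowMs = 2048 * kSamplePeriodMs;
    std::printf("Post Mortem: %.1f ms before, %.1f ms after the dump\n",
                pmWindowMs - kBstDelayMs, kBstDelayMs);
    return 0;
}
```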

Real-time action scheduling - Parallelization of the actions
- The 4 real-time actions are potentially fighting for CPU time
  - The 1 Hz Acquisition is synchronous
  - The Capture, Beam Dump and Post Mortem acquisitions are asynchronous
- The post-mortem data transfer is quite time consuming (> 1 second), so it causes holes in the Acquisition (i.e. we miss an acquisition)
  - Not acceptable!
- FESA allows us to prioritise actions
  - The Capture / Beam Dump / Post Mortem data will stay there until we've read it, so it takes the lowest priority
- So in our case, the priority order is (see the sketch below):
  1. Acquisition
  2. Collimation data (not discussed today)
  3. Capture data
  4. Beam Dump data
  5. Post Mortem data
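One minimal way to express that fixed ordering in code, purely as an illustration; FESA sets action priorities through its design/configuration, not through an enum like this.

```cpp
// Illustrative only: the fixed priority ordering of the BLM real-time actions.
enum class ActionPriority : int {
    Acquisition = 1,  // highest: must never miss the 1 Hz tick
    Collimation = 2,
    Capture     = 3,
    BeamDump    = 4,
    PostMortem  = 5   // lowest: the data stays in the buffers until read
};
```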

Server (client-side) actions
Implemented actions (a.k.a. Properties) include:
- 4 GET actions to complement the 4 main real-time actions:
  - Acquisition
  - Capture
  - Beam Dump
  - Post Mortem
- 2 additional actions to GET expert data (not intended for OP):
  - BLETCExpertAcquisition
  - BLECSExpertAcquisition
- An action to GET and SET the threshold tables and monitor names / details
  - A special action inhibits all real-time actions to avoid corruption of the data
- An action to GET and SET the configuration of the combiner card
  - May be integrated with the action for the thresholds in the future?
As with the real-time actions, we also have the possibility to prioritise these server actions:
- Basically, make sure that the post-mortem action has a lower priority than all the others, and that the Acquisition has the highest priority
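As a rough picture of the property layout described above: the four operational GET properties and the two expert properties are named on the slide, while the threshold and combiner property names below, and the registration mechanism itself, are hypothetical stand-ins rather than the FESA property API.

```cpp
// Illustrative property table - the registration mechanism and the last two
// property names are hypothetical; the others are taken from the slide.
#include <functional>
#include <map>
#include <string>

using PropertyHandler = std::function<void()>;

std::map<std::string, PropertyHandler> makePropertyTable() {
    return {
        {"Acquisition",            [] { /* GET the latest 1 Hz acquisition */ }},
        {"Capture",                [] { /* GET the captured buffers */ }},
        {"BeamDump",               [] { /* GET the XPOC beam-dump buffers */ }},
        {"PostMortem",             [] { /* GET the post-mortem buffers */ }},
        {"BLETCExpertAcquisition", [] { /* GET expert data (not for OP) */ }},
        {"BLECSExpertAcquisition", [] { /* GET expert data (not for OP) */ }},
        {"Thresholds",             [] { /* GET/SET threshold tables, monitor names */ }},
        {"CombinerConfiguration",  [] { /* GET/SET combiner card configuration */ }},
    };
}
```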

Configuration
- Depending on its configuration, the front-end controls what is sent to the logging, fixed displays, operational GUIs, etc. - it's very important!
- FESA provides persistency using NFS files
  - This was deemed not reliable enough, so the configuration data is stored in non-volatile memory on the acquisition cards instead
  - A server action accepts the new threshold data and monitor names and sends them to the non-volatile memory
  - For a trace of what was sent, a copy of every new configuration is stored in a directory on NFS
  - The same system is used for the combiner card configuration
- The source of this configuration is handled by the LSA settings management
  - Plus MCS (critical settings)
  - And some dedicated BLM tables for the data
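A minimal sketch of that SET path, assuming hypothetical helper functions for the non-volatile write and a hypothetical NFS trace directory; none of the names or paths below are the real ones.

```cpp
// Illustrative SET path for new threshold data - the helper function and the
// trace directory path are hypothetical placeholders.
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

struct ThresholdTable {
    std::vector<std::string> monitorNames;
    std::vector<double>      thresholds;
};

// Hypothetical stand-in for the write to the acquisition cards' non-volatile memory.
bool writeToNonVolatileMemory(const ThresholdTable&) {
    // Hardware access would go here.
    return true;
}

// Keep a trace copy of every configuration that was sent, in a directory on NFS.
void storeTraceCopy(const ThresholdTable& table, const std::string& traceDir) {
    std::ofstream trace(traceDir + "/thresholds_trace.txt", std::ios::app);
    for (std::size_t i = 0; i < table.monitorNames.size(); ++i)
        trace << table.monitorNames[i] << ' ' << table.thresholds[i] << '\n';
}

// Server-action handler: push to non-volatile memory, then record the trace.
bool setThresholds(const ThresholdTable& table) {
    if (!writeToNonVolatileMemory(table))           // real-time actions are inhibited meanwhile
        return false;
    storeTraceCopy(table, "/nfs/blm/config_trace"); // illustrative path only
    return true;
}
```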

Finally, some statistics (less is better)
Real-time processes:
- 25 instances of the BLM software, each accessing up to 16 acquisition cards and a combiner card
  - + 2 extra instances deployed in CMS
- 25 (+2?) instances of the BOBR (managed by BI SW) and LTIM (managed by CO) software
- All instances for the LHC have been configured and deployed
Lines of code:
- FESA framework: C++ 17'908 lines, XML 18'553 lines, Perl 4'842 lines, Java 39'941 lines
- BLM-specific code: C++ 6'210 lines, Java 8'860 lines, XML (design) 3'566 lines
- We rely heavily on FESA!

More details
- On the LIDS page (link found on BI SW's homepage):
  - Details of the BLMLHC design
  - Details of the BLMLHC deployment
  - ... and much more
- Also watch the LHC Technical Board Wiki (link also found on our homepage)