“Post Mortem” during Beam Operation
Why me?
I have no general responsibility for post-mortem.
Within CO, I did only a little work on it.
The questions asked by Karl-Hubert are addressed to CO, but they are also questions to other groups.
I was involved at an early stage in the initial discussions on post-mortem.
For machine protection, post-mortem is vital.
I am pushing inside (and outside) CO to take it up….
Rüdiger Schmidt, Review on Controls, September 2005

What is “Post Mortem”?
POST MORTEM – from the Latin for “after death”: an examination made of a dead body to ascertain the cause of death; an inquisition post mortem is one made by the coroner. (Dr. Bill Shepard Dudley, 1880)

Who is dead? The LHC? The magnets? The beam?

Dumping an LHC beam
[Figure (L. Bruno): beam absorber (graphite) inside concrete shielding, about 8 m long, heated up to 800 °C]

First priority: “Post Mortem” is required to verify the correct operation of the LHC protection systems after every beam dump, so that operation can rely on the correct functioning of all protection systems. This includes analysis of data from the transient recording, logging and alarm systems.
Second priority: “Post Mortem” is required to understand the causes of any kind of accident (which should not happen).
An accident could take the LHC out for some hours (quench) …. or for some years.

Report from the LHC Machine Protection Review, May 10th, 2005
A comprehensive post mortem data acquisition capability after a beam dump is crucial in ensuring efficient operations. The Committee suggests that the various sub-system post mortem requirements should be defined centrally rather than determined ad hoc, as seems to be happening at present.
The basic operation concept of the machine protection system requires comprehensive post mortem data acquisition and analysis as well as automatic mandatory self-tests to ensure and re-qualify the anticipated safety level of the system. Those functions require tight coordination between the machine protection and the overall accelerator control system. Technical post-mortem procedures have to be designed, implemented and tested, and practicable operational sequences have to be specified and executed. Post mortem requirements on the sub-systems need to be centrally determined rather than defined ad hoc.
Based on the presentations, the committee was not able to review the software interfaces and software methods involved in any detail. The Committee is concerned that the remaining time until the first beam tests might be insufficient to implement all relevant applications and services.

Data recording: what types of recording?
Transient recording: stores transient data for certain variables after an event or after an internal fault of a system.
Logging system: stores logging data for certain variables; collects data, typically at 1 Hz or slower, at regular intervals or on change.
Alarm system: handles alarms in case of fault conditions and stores alarm data; defines and processes alarm data for certain variables.
Shot-by-shot logging system (designed to monitor LHC filling): stores logging data for certain variables for each extraction from the SPS to TT40 / TI8 (after an event … the shot).
Variables can be stored as one data point or as transient data with many data points. A name and a time stamp are required for all entities. Post Mortem analysis uses data from all these systems.
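A minimal sketch of the common denominator across these four recording systems, under the assumption of a simple record model (the field names and the source labels are illustrative, not an existing CO API): every stored entity carries a name and a time stamp, and can hold either a single data point or a transient with many points.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataPoint:
    timestamp_ns: int          # UTC time stamp in integer nanoseconds
    value: float

@dataclass
class RecordedVariable:
    name: str                  # entity name (naming conventions still to be agreed)
    source: str                # "transient", "logging", "alarm" or "shot-by-shot"
    points: List[DataPoint] = field(default_factory=list)

# A logging variable typically holds one point per second or slower;
# a transient record holds many points around a single event (e.g. a beam dump).
```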

What has been done…. controls
Logging system: operational.
Alarm system: operational.
Shot-by-shot logging system (designed to monitor LHC filling): operational, including recording of transients.
Time stamping: the timing system is connected to the instrumentation for all systems. Time stamping for beam-related measurements is provided by the timing system down to ns accuracy, if required. Time stamping for slower processes is done either with PLCs (~1 ms accuracy) or via WorldFIP (better than 1 ms).
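A small sketch of how the two classes of time stamps could be carried together, so that later correlation knows how precisely two events can be ordered. The TimeStamp type and the can_order helper are invented for illustration and are not part of the existing timing system; only the resolutions (ns from the timing system, ~1 ms from PLC / WorldFIP) come from the slide.

```python
from dataclasses import dataclass

@dataclass
class TimeStamp:
    ns_since_epoch: int     # UTC time in integer nanoseconds
    resolution_ns: int      # ~1 for timing-system stamps, ~1_000_000 for PLC / WorldFIP sources

def can_order(a: TimeStamp, b: TimeStamp) -> bool:
    """Two events can only be ordered reliably if their separation
    exceeds the coarser of the two stated resolutions."""
    return abs(a.ns_since_epoch - b.ns_since_epoch) > max(a.resolution_ns, b.resolution_ns)

# Example: a QPS event stamped by a PLC and a BLM event stamped by the timing
# system that lie 100 µs apart cannot be ordered with confidence.
```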

We have these systems, but they must be fed with data – this is the only way to validate them. Transient data is not yet well covered for the LHC.

What has been done…. CO and other groups
Some documentation:
The LHC Post Mortem System, LHC-Project Note 303, 10/2002 (E. Ciapala, F. Rodriguez-Mateos, R. Schmidt, J. Wenninger)
What do we see in the control room? (Chamonix 12, 2003) (R. Lauckner)
DRAFT: POST MORTEM ASYNCHRONOUS TRANSIENT CAPTURE CLIENT INTERFACE, LHC-CP-ES-0001 rev 0.3 (R. Lauckner)
Functional specifications for beam instrumentation including requirements for data recording (e.g. post mortem):
Measurement of the Beam Losses in the LHC Rings (LHC-BLM-ES )
On the Measurements of the Beam Current, Lifetime and Decay Rate in the LHC Rings (LHC-BCT-ES )
Measurement of the Beam Transverse Distribution in the LHC Rings (LHC-B-ES )
Instrumentation for the LHC Beam Dumping System (LHC-B-ES )
High Sensitivity Measurement of the Beam Longitudinal Distribution of the LHC Beams (LHC-B-ES rev 2.0)
Measurement of the Beam Position in the LHC Main Rings (lhc-bpm-es-0004v2)

What has been done…. beam interlocks
When the beams are dumped (either programmed or after a failure), the Beam Interlock System will:
generate an event to trigger transient recording in all systems that are connected to the timing system and know this event;
time stamp all beam abort request signals from the connected user systems, to establish the exact time sequence of which user requested a beam dump at what time.
This will allow some analysis (see the sketch below): which system originally requested the beam dump, and which other systems would have triggered a beam dump shortly later. This would also work if the PM trigger via the timing system did not get out. This was working during last year’s TT40 / TI8 tests … however, this is only a very small part of what is required.
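The analysis this enables can be sketched in a few lines: sort the time-stamped abort requests, take the earliest as the originator, and flag the systems that followed within a short window as the ones that would have dumped the beam anyway. The function, the window value and the system names in the example are hypothetical; only the principle (time-stamped requests, exact time sequence) comes from the slide.

```python
from typing import List, Tuple

# Each entry: (user system name, time stamp in ns at which it requested a beam dump)
AbortRequest = Tuple[str, int]

def analyse_dump_requests(requests: List[AbortRequest], window_ns: int = 1_000_000):
    """Return the system that first requested the dump, plus those that
    followed within window_ns (they would have triggered a dump anyway)."""
    ordered = sorted(requests, key=lambda r: r[1])
    first_system, t_first = ordered[0]
    followers = [name for name, t in ordered[1:] if t - t_first <= window_ns]
    return first_system, followers

# Illustrative only -- names and times are invented:
# analyse_dump_requests([("BLM", 1_002_000), ("QPS", 1_000_000), ("BPM", 1_500_000)])
# -> ("QPS", ["BLM", "BPM"])
```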

The tools we had during the TT40 / TI8 run in 2005 allowed us to reconstruct the cause of the beam accident.
CERN-AB-Note, “TT40 Damage during 2004 High Intensity SPS Extraction”, Goddard, B.; Kain, V.; Mertens, V.; Uythoven, J.; Wenninger, J.

What is missing
It is very useful to write functional specifications with requirements, but the work is not finished there.
What can we expect from the different systems, and when? How do we get the data into a central repository? How do we store and manage the data? How do we correlate the data? How do we analyse the data?
Issues: details on event triggering, data formats, data volumes, data storage and management, data archiving, naming conventions.

How to go on….
The objective is to arrive at a coherent system across the LHC.
‘Post Mortem’ = data recording and analysis for LHC accelerator commissioning and operation; it cannot be handled by the controls group alone.
In general, the data is provided by the equipment groups. Most transient recording is done by hardware developed in the equipment groups. However, in my view the controls group has the responsibility to put things together (tbd).
Many building blocks to make a coherent system are available. Post mortem for the commissioning of the electrical circuits has been very valuable; we should build on that experience and competence.

Role of the different teams
Operation and accelerator physics: formulating requirements; helping with software, mainly to analyse the data.
Equipment groups: formulating requirements; providing the front-end systems necessary for recording the data (HW); pushing their data up from their front-end systems (SW).
Controls group: providing and transporting the triggers for transient recording (timing); for a few systems, providing front-end acquisition; transmitting the data from the front-end systems to the servers; storing and managing the data; providing tools to visualise and partially analyse the data (pattern recognition), and to allow easy access to the data (for others to analyse it).

Proposal
Many systems and people need to work together; this is an issue that cannot be covered in 30 minutes.
I suggest organising a mini-workshop (~1 day) to discuss: what is the status of the work in the equipment groups? What is the status of the work in CO? What are others doing? How do we go on?
After such a mini-workshop, we should decide how to co-ordinate the activities (working group, project, responsibilities, ….) – who is the coroner?
Advice and help from other people (from HERA, RHIC, the TEVATRON) might be welcome.

Conclusion
The LHC does not have a general system for recording transient data – this task is with the equipment groups.
Misconception: the controls group is responsible for all ‘Post Mortem’ issues.
As in other areas, collaboration between groups / teams is required – but this involves many teams; a progressive effort, starting with some main players.
Data analysis is an endless effort…. more sophisticated analysis is an excellent task for PhD students, possibly fellows, ….
For the moment, it is not a lack of manpower that stops progress, but rather the lack of a coordinated effort.
Via SACEC we got PM going for hardware commissioning; the same is required for beam commissioning.

Acknowledgements
R. Lauckner, A. Rijllart, J. Wenninger, K. M. Mess, F. Rodriguez-Mateos, E. Ciapala. Many others were involved in the discussions.

Functional Specification: MEASUREMENT OF THE BEAM POSITION IN THE LHC MAIN RINGS
5.12 TRANSIENT RECORDING AND POST-MORTEM
The BPM system shall be able to recognize two external events, total beam loss and partial beam loss, and take appropriate action, using the BPM memories as transient recorders. The memories corresponding to these two kinds of events should be separate, to avoid any loss of information in case of a total beam loss. The actions to be carried out when these events are received are under definition by the Post-Mortem Working Group, whose documentation should be consulted [post]. Provisionally, in the case of a total beam loss event, it is foreseen to freeze the BPM memory where trajectories are accumulated 124 turns after the trigger and retain the last 1024 values (900 before the trigger, 124 after), and to freeze the closed orbit buffer to record the last 1000 orbits before the trigger and 24 orbits after the trigger.
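A minimal sketch of the provisional trajectory-buffer freeze described above; the class and its interface are invented for illustration, while the numbers 1024 / 900 / 124 are those quoted in the specification.

```python
from collections import deque

class PostMortemBuffer:
    """Sketch of the provisional BPM scheme: keep the last 1024 turn-by-turn
    values; on the total-beam-loss trigger, record 124 more turns and then
    freeze (900 before the trigger, 124 after)."""

    def __init__(self, depth: int = 1024, post_trigger: int = 124):
        self.buffer = deque(maxlen=depth)     # circular buffer of trajectory data
        self.post_trigger = post_trigger
        self.remaining_after_trigger = None   # None = not triggered yet
        self.frozen = False

    def add_turn(self, value: float) -> None:
        if self.frozen:
            return                            # buffer is frozen for read-out
        self.buffer.append(value)
        if self.remaining_after_trigger is not None:
            self.remaining_after_trigger -= 1
            if self.remaining_after_trigger <= 0:
                self.frozen = True

    def trigger(self) -> None:
        """Called on the total-beam-loss event."""
        self.remaining_after_trigger = self.post_trigger
```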

Functional Specification: MEASUREMENT OF THE TRANSVERSE BEAM DISTRIBUTION IN THE LHC RINGS
4.8 POST MORTEM
During normal running, the circulating-beam monitoring devices shall be able to recognize total beam losses and take appropriate action. Provisionally, it is foreseen that:
a first circular buffer should store the rms beam sizes, beam position and tilt, whenever possible measured every 20 ms, over the last 20 s of beam;
a second circular buffer should store the last measured individual bunch sizes, positions and tilt recorded over the last ten minutes (i.e. 10 sets of values per bunch).

Functional Specification: ON THE MEASUREMENT OF THE BEAM LOSSES IN THE LHC RINGS
9.8 POST-MORTEM ANALYSIS
The signals of all monitors should be buffered for the last turns, such that they can be read out and analysed after a beam dump. In addition, the average rates of all monitors should be easily available for time scales of a few seconds and of 10 minutes before a beam dump.
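One possible way to keep those averages readily available is sketched below, assuming the loss rates are sampled at 1 Hz (a sampling rate not stated in the specification); the class and method names are illustrative.

```python
from collections import deque

class RateHistory:
    """Sketch: keep enough 1 Hz loss-rate samples to provide the average
    over the last few seconds and over the last 10 minutes on request."""

    def __init__(self, seconds_kept: int = 600):
        self.samples = deque(maxlen=seconds_kept)   # one sample per second

    def add_sample(self, rate: float) -> None:
        self.samples.append(rate)

    def average(self, last_seconds: int) -> float:
        window = list(self.samples)[-last_seconds:]
        return sum(window) / len(window) if window else 0.0

# After a beam dump one would ask e.g. history.average(5) and history.average(600).
```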

Functional Specification: HIGH SENSITIVITY MEASUREMENT OF THE LONGITUDINAL DISTRIBUTION OF THE LHC BEAMS
5.6 LOGGING, POST MORTEM
During normal running, it is felt that a logging periodicity of the nominal bunch distributions of about one minute is adequate. Other low-density distributions can be logged at a lower rate, apart from the abort gap population, which needs to be checked at least every second. The goal of the post-mortem is to save the beam pattern prior to a beam dump. Of relevance are the beam intensities in each bucket; the details of the tail distributions are less important. To fulfil this goal, the standard-sensitivity mode data should be frozen in a circular buffer of depth 1 second. More data could be made available on request. The exact requirements in this domain need to be finalized by the Machine Protection Working Group.

Functional Specification: ON THE MEASUREMENTS OF THE BEAM CURRENT, LIFETIME AND DECAY RATE IN THE LHC RINGS
6.7 POST MORTEM RECORDING
For post mortem analysis, data will be stored in different buffers, to be frozen by external events signalling a partial or total current loss. The exact requirements in this domain are not finalized yet. Provisionally, it can be foreseen that:
a first circular buffer will store the beam current measured every 20 ms over the last 20 s of beam;
a second circular buffer will store the turn-by-turn data measured by the bunch-to-bunch monitor over 1000 turns. During the initial running-in period, storing the sum of all bunch currents will be sufficient. Later, when getting close to the nominal currents, it will be useful to store the individual bunch data; to limit the necessary memory, a proper sampling or storage strategy can then be foreseen;
a third circular buffer will store the last measured individual bunch currents, recorded every second, over the last minute.
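A back-of-the-envelope check of what these buffers imply in memory helps explain why the specification defers individual bunch data and suggests a sampling or storage strategy; the 3564 bunch slots per ring and the 4-byte sample size are assumptions, not figures taken from the specification.

```python
# Rough sizes implied by the provisional BCT buffers.
# Assumptions (not from the specification): 4-byte samples, 3564 bunch slots per ring.
SAMPLE_BYTES = 4
BUNCH_SLOTS = 3564

buffer1_samples = int(20 / 0.020)                       # 20 s of total current at 20 ms -> 1000 samples
buffer2_bytes = 1000 * BUNCH_SLOTS * SAMPLE_BYTES       # 1000 turns, bunch by bunch -> ~14 MB
buffer2_sum_only = 1000 * SAMPLE_BYTES                  # storing only the sum -> 4 kB
buffer3_bytes = 60 * BUNCH_SLOTS * SAMPLE_BYTES         # one bunch-by-bunch snapshot per second for 1 min

print(buffer1_samples, buffer2_bytes, buffer2_sum_only, buffer3_bytes)
```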