Workshop on QPS Software Layer: Hardware / Agents
R. Denz, TE-MPE-EP, 26-Aug-2015


QPS supervision – basic architecture

[Architecture diagram: system families nQPS, IPD, IPQ, IT, 600 A, leads, main circuits, EE]

QPS supervision provides data for:
- Operator screens (WinCC OA) and expert consoles
- Software interlocks (QPS_OK signal)
- LHC logging database
- Post mortem servers, viewers and automatic analysis
- Warning generation (SMS, e-mail) for pre-defined fault states (e.g. loss of a quench heater power supply)
- QPS configuration management (LSA)

QPS supervision user requirements over the years …

2001: The main scope was the supervision of the QPS/EE detection and protection devices. The initially limited diagnostic capabilities for the superconducting circuits were regarded as fully sufficient at the time.

2014: The main scope is now the enhanced diagnostics of the superconducting circuits and elements, in many cases exceeding the capabilities of the magnet test benches by far (e.g. splices, quench heaters). The supervision of the significantly more complex detection systems also required substantial upgrades.

QPS low level hardware (field-bus couplers)

- Fieldbus (WorldFIP™) controlled data acquisition system
  - "Classic" architecture combining an 8052-compatible μ-controller (ADuC831™) with the notorious MicroFIP™ ASIC, developed at the beginning of the century
  - Synchronized to accelerator time (Δt = 1 ms) via the field-bus; absolute time stamping done by the field device
  - Up to 8 analogue input channels and up to 80 digital I/O channels; the latest QPS crate designs use dedicated circuit boards for the sampling of analogue signals (e.g. enhanced quench heater supervision)
  - Interfaces to associated equipment by local busses (SPI or I²C), with up to 24 active clients
- 2516 devices in the LHC (~2300 exposed to ionizing radiation)
  - The required radiation tolerance restricts the functionality of the field devices, e.g. for data buffering & processing

QPS low level hardware – device firmware

- Core of the field-bus coupler is the 8052-compatible ADuC831™ chip (12 clocks / instruction)
  - Code is executed from up to 62 kBytes of program flash/EE memory
  - The chip is sufficiently radiation tolerant for LHC operation (run 1 experience, tests)
- Firmware written in C for the 8052 using the Keil™ compiler
  - OS-free embedded C application
  - Synchronization to LHC controls and time by an external interrupt created by the MicroFIP™
  - PM event time stamping by polling the DAQ trigger lines with Δt = 1 ms
  - The controller also manages the local crate communication (as software master!)
  - Base application and corresponding API date from 2002, with many incremental upgrades over the years
    - Based on the gained experience, the code and the API could be simplified significantly
    - Requires a major revision (also for the gateway RT application), preferably combined with a possible hardware upgrade during LS3
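The polled time stamping mentioned above bounds the achievable resolution: an event is stamped at the next 1 ms polling tick, so the worst-case error stays below one polling interval. A minimal illustrative sketch (the real firmware is 8052 C; function and names here are invented):

```python
import math

# Polled PM time stamping, as on the field-bus coupler: the DAQ
# trigger lines are sampled every POLL_INTERVAL_MS, so an event is
# assigned the time of the polling tick at which it is first seen.
POLL_INTERVAL_MS = 1.0

def stamped_time(event_ms: float) -> float:
    """Time stamp assigned to an event occurring at event_ms."""
    return math.ceil(event_ms / POLL_INTERVAL_MS) * POLL_INTERVAL_MS

print(stamped_time(12.3))  # 13.0 -> worst-case error below 1 ms
print(stamped_time(5.0))   # 5.0  -> event on a tick is stamped exactly
```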

QPS low level hardware – field-bus

- Fieldbus of the WorldFIP™ standard (1 Mbit/s)
  - Macro-cycle length: t_macro = 100 ms for tunnel segments (agents of type DQAMC, DQAMGS); t_macro = 200 ms for underground areas (agents of type DQAMG, DQAMS)
- Clients are based on the MicroFIP™ ASIC (different versions and makes)
  - Only periodic communication, no messages
  - Consumed variables TIME (global) and COMMAND, 8 bytes each
  - Produced variables DATA0, 1, 2, 3, 24 bytes each; maximum data transmission rate per client: 960 bytes/s
  - The modest amount of 96 bytes may translate into 160 supervision signals!
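The quoted per-client rate follows directly from the figures on the slide: four produced variables of 24 bytes each, read once per 100 ms macro-cycle in the tunnel segments. A back-of-the-envelope check (names invented, not QPS code):

```python
# WorldFIP bandwidth arithmetic for one client, using the slide's
# figures: DATA0..DATA3 at 24 bytes each, one read per macro-cycle.
BYTES_PER_VARIABLE = 24
PRODUCED_VARIABLES = 4      # DATA0, DATA1, DATA2, DATA3
MACRO_CYCLE_MS = 100        # tunnel-segment macro-cycle

def payload_per_cycle() -> int:
    """Payload bytes one client can produce per macro-cycle."""
    return BYTES_PER_VARIABLE * PRODUCED_VARIABLES

def throughput_bytes_per_s() -> float:
    """Sustained per-client data rate."""
    return payload_per_cycle() * 1000 / MACRO_CYCLE_MS

print(payload_per_cycle())       # 96 bytes per cycle
print(throughput_bytes_per_s())  # 960.0 bytes per second
```

With the 200 ms macro-cycle used in the underground areas the same payload halves the rate to 480 bytes/s, which is why the LS1 network re-configuration (next slides) mattered for PM transfer times.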

QPS low level hardware limitations

The original hardware design, dating back to 2002, revealed some limitations during the many years of operation. Not all of the constraints can be overcome easily, and some may require major upgrades.

- Rather limited size of the MicroFIP™ memory (120 bytes)
  - Limits the maximum amount of data transferable during one macro-cycle to 96 bytes
  - Cannot transfer data from redundant devices within one cycle (independent of macro-cycle length) → must toggle between A/B
- Limited number of hardware trigger lines
  - Typically one hardware DAQ trigger per physical controller
  - Can cause problems with PM time stamping, identification of trigger sources and independent operation of circuits, especially during commissioning; affects in particular the 600 A multi-circuit protection crates
- Remote firmware upgrades
  - The controller hardware supports remote firmware upgrades in most cases, but the implementation would require a new controller firmware and a major upgrade of the gateway RT application (e.g. allowing on-the-fly change of the macro-cycle)
  - Remote upgrade of detection board firmware is only supported by some hardware (SPI/I²C programmable devices only, no JTAG chain); rarely needed, and a potential safety issue
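The A/B constraint above means a complete snapshot of a redundant pair always spans two macro-cycles, whatever the cycle length. A minimal sketch of such a toggling schedule (illustrative names, not the actual coupler logic):

```python
# A/B toggling forced by the 120-byte MicroFIP buffer: only one
# redundant device's 96-byte payload fits per macro-cycle, so the
# coupler alternates which source it reads each cycle.
def ab_toggle(cycles: int):
    """Yield which redundant device is read on each macro-cycle."""
    source = "A"
    for _ in range(cycles):
        yield source
        source = "B" if source == "A" else "A"

schedule = list(ab_toggle(6))
print(schedule)  # ['A', 'B', 'A', 'B', 'A', 'B']
# A full A+B snapshot therefore takes two macro-cycles (200 ms in
# the tunnel, 400 ms in the underground areas).
```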

QPS low level hardware – R2E related problems

- Radiation tolerance of the MicroFIP™
  - In some cases the device gets stuck by radiation induced effects → loss of field-bus communication → power cycle required for recovery
    - Automatic for the MB supervision systems (LS1 upgrade)
    - By remote power cycle for the 600 A supervision installed in the RR
    - By access for the rest
  - 12 events in 2012 (9 × DQAMCMB, 3 × DQAMCMQ)
  - Upgrade for the MQ supervision in preparation (could be deployed during a technical stop)
- Stalled DAQ triggers due to SEU for DQAMCMB and DQAMCMQ controllers
  - "ISO150 problem", mitigated by firmware (199 events in 2012)
  - The blocked DAQ trigger can be remotely unlatched (DQAMCMB only)
  - Creates a "fake" PM, interpreted by the analysis tools as a failed heater discharge; as the controller communicates this particular fault state, the analysis tools should take notice
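The recovery hierarchy above (automatic power cycle, remote power cycle, access) can be pictured as a simple decision on sustained loss of communication. A hypothetical sketch, not the deployed supervision logic:

```python
# Hypothetical recovery decision for a stuck MicroFIP client: after
# a number of consecutive silent macro-cycles, escalate to the
# recovery path available for that system family.
RECOVERY_PATH = {
    "MB": "automatic_power_cycle",   # LS1 upgrade
    "600A_RR": "remote_power_cycle",
    "other": "access_required",
}

def recovery_action(system: str, silent_cycles: int, limit: int = 3) -> str:
    """Decide what to do after `silent_cycles` cycles without data."""
    if silent_cycles < limit:
        return "wait"
    return RECOVERY_PATH.get(system, "access_required")

print(recovery_action("MB", 1))      # 'wait'
print(recovery_action("MB", 3))      # 'automatic_power_cycle'
print(recovery_action("600A_RR", 5)) # 'remote_power_cycle'
```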

QPS low level hardware – upgrades during LS1 (I)

- Re-configuration of the QPS field-bus networks (for details see Julien's talk)
  - Eases possible upgrades, as there are fewer agents per segment
  - Doubles the data transmission rate in the LHC tunnel segments
  - The original user request was to reduce the transmission time of large PM data blocks, especially for the enhanced quench heater supervision → now everybody wants it …
- Visibility of redundant circuit boards
  - Simultaneous transmission during normal operation (= logging) is not feasible due to field-bus limitations → A/B toggling required; some issues in the high level supervision still to be fixed (loss of QPS_OK)
  - Successfully implemented for PM data
- Transfer of analogue data to the logging database
  - Suppression of on-change data recording, differentiation between slow and fast data
  - Essential basis for many tools such as signal integrity checkers
  - A very successful, smooth and absolutely essential upgrade!
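One common way to suppress on-change recording is a deadband filter: a sample reaches the logging database only when it differs from the last logged value by more than a threshold. This is a generic sketch of that idea, not the actual QPS/logging implementation:

```python
# Hypothetical deadband filter: forward a sample to logging only
# when it moved by more than `threshold` since the last logged one.
def deadband(samples, threshold):
    """Yield only the samples worth logging."""
    last = None
    for value in samples:
        if last is None or abs(value - last) > threshold:
            yield value
            last = value

raw = [0.0, 0.01, 0.02, 0.5, 0.51, 1.2]
logged = list(deadband(raw, threshold=0.1))
print(logged)  # [0.0, 0.5, 1.2] -- small jitter is suppressed
```

Differentiating slow and fast data then amounts to running such filters with different thresholds and sampling rates per signal class.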

QPS low level hardware – upgrades during LS1 (II)

- Remote recovery of stalled communication busses
  - Implemented for SPI bus based systems (MB and MQ protection): allows bus recovery without power cycle or access; meanwhile integrated into the Swiss tool; to be extended to I²C based systems
  - As the bus recovery process does not create triggers, a fully automatic, transparent solution can be envisaged
- Introduction of configuration management
  - Additional level of system security by systematically checking system parameters; critical QPS parameters are already very well protected by the detection system firmware
  - Implementation currently starting with the main circuits, followed by IPQ …
  - Automatic change/correction of device parameters not yet implemented: no urgent need for the time being, though it could be used in some cases to ease system maintenance
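At its core, the configuration-management check described above is a comparison of parameters read back from a device against a reference set, with mismatches flagged rather than corrected automatically. A minimal sketch (parameter names are invented for illustration):

```python
# Hypothetical configuration check: compare read-back parameters
# against the reference configuration and report any mismatches.
# Detection stays read-only -- no automatic correction, as on the slide.
def check_parameters(reference: dict, readback: dict) -> list:
    """Return the names of parameters whose read-back value differs."""
    return [name for name, value in reference.items()
            if readback.get(name) != value]

reference = {"u_threshold_mV": 100, "discrimination_time_ms": 10}
readback  = {"u_threshold_mV": 100, "discrimination_time_ms": 20}
print(check_parameters(reference, readback))  # ['discrimination_time_ms']
```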

QPS low level hardware – future developments

- Complete the LS1 upgrades
  - Concerns the 600 A systems in the RR; an opportunity to fix some PM issues
  - Complete the implementation of configuration management
- Additional test benches allowing system level tests
  - Simulation of complete field-bus segments, tests of the complete software layer, etc.
  - To be installed in the new TE-MPE test areas in 272
- Additional hardware DAQ triggers for IPQ and 600 A systems
  - Technically solved; implementation depends only on available specialist resources
- Radiation exposed systems (tunnel + RR73/RR77)
  - Keep CERNFip as the standard field-bus
  - Migrate the QPS clients from μFIP to nanoFIP: requires a completely new development due to the lack of backward compatibility; some QPS functionality to be transferred to the RT application
- Systems installed in radiation free areas (LHC)
  - Possible change to a modern field-bus, e.g. EtherCAT™

Summary & conclusions

- QPS supervision is an essential part of an important machine protection system and the key to the diagnostics of the superconducting circuits
  - The QPS supervision low level hardware is based on custom made electronics, taking into account the rather specific constraints of the LHC tunnel (radiation, access, EMC …)
- Upgrades of the low level hardware and the corresponding firmware are tedious, require specialist knowledge and must be planned carefully and well ahead of time
  - Future developments will try to minimize data processing in the field and move functionality to the gateways
- Further evolution of the RT application needs to be discussed
  - The current application is working very well, but the LS1 experience and future upgrades of the QPS supervision call for changes
  - Future developments will require better control of the application code by a team of experts familiar with the QPS field constraints
  - Effective data processing prior to the transfer to the end user application can solve the majority of the problems encountered during the LS1 hardware commissioning
  - The QPS team clearly prefers insourcing the RT application in the medium term