09/11/2006 - Slide 1: Detector Control Systems. A software implementation: CERN Framework + PVSS. Niccolò Moggi and Stefano Zucchelli, University and INFN Bologna.

09/11/2006 - Slide 2: Hardware and Software
[Layer diagram: three layers and their technologies. Supervision: SCADA on LAN/WAN, storage, configuration DB, archives, log files, connections to other systems (LHC, safety, ...). Process management: controllers/PLCs, VME crates, field buses and LAN nodes, with communication via OPC, DIM, FSM and other protocols, both commercial (PLC/UNICOS) and custom (VME/SLiC). Field management: field buses and nodes, sensors/devices, the experimental equipment.]
Logically, any DCS has a three-layer hardware structure: supervision -> control -> field.

09/11/2006 - Slide 3: Requirements: architecture
- Client/server architecture with hardware abstraction layers
  - servers execute tasks and provide data independently of the clients
- Hierarchical mechanism (tree structure)
  - FSM (finite state machines): "nodes" with one parent and many children -> easy partitioning -> a distributable system, with the possibility of decentralized decision making and error recovery (see the sketch below)
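
As an illustration of the tree mechanism, here is a hypothetical CTRL sketch that propagates a command from a node down to its children, so that any subtree can also be operated on its own. The datapoint layout (a ".children" list and a ".command" element per node) and the function name are invented for illustration; this is not the JCOP FSM API.

    // Hypothetical sketch of hierarchical command propagation (not the JCOP FSM API).
    // Each FSM node is assumed to be a datapoint with a ".children" element
    // (dyn_string of child node names) and a ".command" string element.
    void sendCommand(string node, string command)
    {
      int i;
      dyn_string children;
      dpSet(node + ".command", command);        // act on this node itself
      dpGet(node + ".children", children);      // read the list of child nodes
      for (i = 1; i <= dynlen(children); i++)   // dyn arrays are 1-indexed in CTRL
        sendCommand(children[i], command);      // recurse into each subtree
    }

    main()
    {
      // Partitioning comes for free: any node can be taken as the root
      // of its own subtree, e.g. switching on only the "HV" branch.
      sendCommand("HV", "SWITCH_ON");
    }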

09/11/2006 - Slide 4: Requirements: implementation
- reliability
- flexibility, expandability
- low cost
- short development time
- ease of use (for developers and users)
- documentation/support

09/11/2006 - Slide 5: CERN choice: JCOP + PVSS
What PVSS is:
- commercial software by ETM (an Austrian company)
- a SCADA (supervisory control and data acquisition) system:
  - run-time DB, archiving, logging, trending
  - alarm generation and handling
  - device access (OPC/DIM, drivers) and control
  - user data processing (C-style scripting language, see the sketch below)
  - graphical editor for the user interface
What JCOP (the Joint COntrols Project) is:
- a framework CERN developed on top of PVSS:
  - simple interface to PVSS
  - implements the hierarchy (FSM)
  - provides drivers for the most common HEP devices
  - many utilities (e.g. graphics)
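
To give a flavour of the scripting layer, here is a minimal CTRL sketch using the standard dpSet/dpGet calls; the datapoint names are hypothetical.

    // Minimal PVSS CTRL sketch: write a setting, read a value back.
    // "crate01.board05.channel002" is a hypothetical datapoint.
    main()
    {
      float vMon;
      dpSet("crate01.board05.channel002.vSet", 1500.0);   // write the demand voltage
      dpGet("crate01.board05.channel002.vMon", vMon);     // read the monitored voltage
      DebugN("Monitored voltage:", vMon);                 // print to the PVSS log
    }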

09/11/2006 - Slide 6: JCOP + PVSS
[Architecture diagram: the supervision layer, a PC running Windows or Linux, hosts the supervisory application (FSM, DB, web, etc.) built on the JCOP FW on top of SCADA (PVSS). It communicates via OPC, PVSS Comms, DIM or DIP with the front-end layer: PLCs running UNICOS FW front-end applications, commercial devices reached through device drivers, and other systems.]

09/11/2006 - Slide 7: JCOP + PVSS
[Layered diagram showing which parts are commercial, framework or user-written: PVSS supplies the event manager, data manager and OPC client, talking to OPC servers and commercial devices; the framework adds the Device Editor & Navigator, the controls hierarchy, GEH/EAH links to external systems, and custom front-end components; the user supplies the supervisory application, C/C++ code and the user-specific front-end equipment.]

09/11/2006 - Slide 8: JCOP features
Provides a complete component set for:
- CAEN HV power supplies
- Wiener crates and LV power supplies
- other power supplies (ISEG, ELMB)
- generic devices to connect analog or digital I/Os
Here "complete" means:
- any necessary OPC/DIM servers
- device modelling (mapping of PVSS data-points to device values, see the sketch below)
- scripts, libraries and panels to configure and operate the device
Other tools are provided to integrate the user's own devices.
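
As an example of what the device modelling buys you, the sketch below operates a CAEN HV channel through its framework data-point instead of talking OPC directly. The device name is hypothetical, and the element names (.settings.v0, .settings.onOff, .actual.vMon) are assumed from the FW CAEN component; check them against the installed version.

    // Sketch: operating a FW-modelled CAEN HV channel from CTRL.
    // Device and element names are assumptions, not verified against a FW release.
    main()
    {
      float vMon;
      string ch = "caenCrate01Board05Ch002";   // hypothetical FW device data-point
      dpSet(ch + ".settings.v0", 1450.0);      // demand voltage
      dpSet(ch + ".settings.onOff", true);     // switch the channel on
      dpGet(ch + ".actual.vMon", vMon);        // read back the monitored voltage
      DebugN(ch, "vMon =", vMon);
    }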

09/11/2006 - Slide 9: PVSS II features
- Distributed architecture: several processes for different tasks ("managers") run separately and communicate internally via TCP/IP
- Managers subscribe to data ("subscribe on change" mode, see the sketch below)
- The event manager is the heart of the system
- Device oriented (abstraction of complex devices)
- Devices are modelled using "data-points":
  - all the data relative to a device are grouped together and may be reached hierarchically in C++ style (e.g. crate.board.channel.vMon)
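
Subscribe-on-change is what a script gets through dpConnect: the callback fires only when the value actually changes, not on a polling cycle. A minimal sketch, with a hypothetical data-point name:

    // Subscribe-on-change in CTRL: vMonChanged() runs on every value change.
    main()
    {
      dpConnect("vMonChanged", "crate.board.channel.vMon");
    }

    // Callback: receives the data-point element name and its new value.
    void vMonChanged(string dpe, float value)
    {
      DebugN("New value of", dpe, ":", value);
    }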

09/11/2006 - Slide 10: PVSS II system architecture
[Manager diagram: a user interface layer (several UIMs); a processing layer (CTRL and API managers); the communication and memory layer (event manager EV and data manager DM); and a device driver layer (drivers D) at the bottom.]

09/11/2006 - Slide 11: PVSS: how it was chosen
CERN chose PVSS in 1999 from among more than 40 products, based on a set of "objective" criteria:
- scalability
- cross-platform support
- archiving and trending ability
- remote access
- security
- alarm handling
- extensibility and ease of use
It was a long evaluation process (lots of tests), and much development has been done since then.

09/11/2006 - Slide 12: JCOP+PVSS advantages
- Scalability: practically no limits (see next slide)
- Stability of the kernel
- Flexibility (customization, easy integration of user functions)
- Windows AND Linux (not OR):
  - managers of the same system may run on different platforms
  - one may develop on one platform and run on the other
  - only limitation: OPC clients/servers must run on Windows
- By now it is tried and tested (COMPASS, HARP, NA60, all four LHC experiments)
- HV and LV are "plug and play" (drivers and device modelling included)
- Easy partitioning (commissioning and calibrations)
- Documentation (on the CERN site, not ETM's)
- Easy remote access on the web through a web server:
  - the user's GUI gets downloaded by the remote browser
  - claimed to support any security option (well...)
- Very flexible alarm handling scheme
- DB: proprietary or Oracle
- Redundancy (doubled system with automatic switchover)
- Some nice safety features if the system gets overloaded

09/11/2006 - Slide 13: Scalability (continued)
CERN is not aware of any built-in limit:
- as many managers as needed (all communications are handled automatically)
- scattered system: one system running on many computers
- distributed systems: multiple systems connected together and exchanging data (see the sketch below)
[Diagram: three systems (Syst 1, Syst 2, Syst 3), each with its own managers (UIMs, CTRL, API, EV, DM, drivers), interconnected through DIST managers.]
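
In a distributed system, a data-point living in another system is addressed simply by prefixing the system name, so exchanging data across systems needs no extra code. A minimal sketch, with hypothetical system and data-point names:

    // Reading a data-point from another system of a distributed setup:
    // the "Syst2:" prefix routes the request through the DIST managers.
    main()
    {
      float vMon;
      dpGet("Syst2:crate.board.channel.vMon", vMon);
      DebugN("Remote vMon =", vMon);
    }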

09/11/2006 - Slide 14: Disadvantages
- COST!!!
  - we have no real idea: one should investigate with ETM
  - complex licensing model: it is usually possible to negotiate special deals
- Maybe it is even too "big"?
- Part of the device drivers still needs to be developed (but this is unavoidable)
- Support by ETM in the US?
- CERN will not commit to any formal support, but "this does not mean we will not help you if we could"

09/11/2006 - Slide 15: Example: CAEN HV
1. Open PVSS and establish communication to the crate (OPC).
2. Select the CAEN crate from the installation tool.
3. Add the crate and board(s) to the system configuration.
At this point you already have a mapping between OPC items and PVSS data-points. [Screenshot: the menu in which the data-points can be browsed.] All that is left is to configure your device settings, alarms, plots, etc.

09/11/2006 - Slide 16: Performance
Reports on tests made at CERN in 2003-2006:
- distributed 16-system setup in a 3-level tree on 16 PCs of various types, each system with ~40K DPEs
- tested up to 260K DPEs on one PC (plus a non-realistic test: 5M DPEs on the top PC of a 130-system, 5-level tree)
- the total number of DPEs is not significant in subscribe-on-change mode; what overloads a system is the number of changes/s
A P4 (2.4 GHz CPU, 2 GB RAM) running the EM + DM + 5 managers:
- saturates the CPU at 1600 changes/s (500 changes/s = 35% CPU)
- with the EM moved to a separate machine, the CPU saturates at 2800 changes/s
- alarm handling and archiving saturate at ~700 alarms/s
Dual-CPU machines and large RAM are well exploited.
With 400K DPEs (including FEBs):
- 1% changes/s: a 1+2 CPU tree
- 5% changes/s: a 1+10 CPU tree

09/11/2006 - Slide 17: Other possibilities
- EPICS: see Andrew Norman's presentation
- iFix (by Intellution):
  - commercial
  - slow (see the CDF experience)
  - fragile connections between nodes
  - no drivers included
  - Windows only
  - limit of 100,000 channels?
- LabVIEW: not for large systems
There is a lot of other commercial software out there, but it would have to be tested...