Real Time Experiment Control in a Tokamak Fusion Device: Technical Aspects and New Developments. F. Sartori. The content of this presentation should be considered my personal opinion, not that of the organisations I work for.

Summary
Real Time Experiment Control (RTEC)
– A bit of history
Requirements (JET perspective)
– Network requirements
– CODAS requirements
– Control node requirements
Real time control node
– Design and implementation challenges

RT Experiment Control - 1
The first tokamak devices only required local control systems:
– Control of the PF/TF currents to prescribed values, with the exception of radial plasma control, which required feedback from a handful of magnetic measurements.
Elongated machines required the introduction of vertical stabilisation in order to keep the plasma column vertically positioned inside the vessel:
– Analogue control systems were still adequate.

RT Experiment Control - 2
Modern tokamaks require a flexible plasma shape control capability:
– Digital real time plasma shape reconstruction
– Digital real time plasma shape control
More recently, the requirement to achieve steady-state fusion parameters drives the need for real time control of many plasma parameters:
– Real time elaboration of internal diagnostics is now needed
– Additional heating and fuelling become actuators
– A real time network is introduced to allow communication among measurements, actuators and controllers
– The concept of a central controller is introduced

RT Experiment Control - 3
A modern tokamak is evolving from a classic physics experiment:
– Prepare the target conditions (plasma)
– Perform the experiment (add beam/gas…)
– Acquire the data
to something more akin to a flying machine:
– Navigate towards the fusion performance conditions
– Maintain the conditions by rejecting disturbances
– Manage an emergency landing in all conditions
ITER has very challenging Real Time Experiment Control requirements!

Requirements (JET perspective)
The current JET architecture is the result of 15+ years of development driven by the evolution of experimental needs. The details of JET implementation choices are not necessarily useful to the audience for future developments. I will instead highlight the complex requirements that have driven, and are still driving, JET developments; they are the most important result of many years of experience.

Requirements – Distributed
Distributed = the necessary I/O and processing can hardly be implemented on a single system. Why?
RTEC involves virtually every tokamak subsystem (SS):
– The SS are physically distributed over a wide area.
Production of RT elaborated diagnostic data needs to happen locally to the diagnostic system:
– Many raw signals are needed (> 1000)
– In some cases, processing of very high frequency data is required to produce a useful signal.
RT management of actuator references needs to happen locally:
– Complex systems like additional heating need to process the reference together with their own plant status information.
– Limits and other generic local constraints need to be taken into account.

Requirements – Separated
Separated = certain control/measurement/actuation functions are better implemented using distinct hardware systems. Why?
The overall RTEC should be robust against the failure of a single element:
– Full redundancy is the ideal target
– Localised and intelligent management of the most probable failures is a must
Addition/removal of a system shall be possible without the need for full re-commissioning:
– Obviously, functionalities associated with that system will be affected.
Other reasons for separation:
– Large differences in I/O or processing requirements
– Differences in impact on machine safety
– Different levels of required availability
– Some functions can be properly designed and tested against reliable models; others are of a more experimental nature.

Requirements – Infrastructure
Distributed/separated RTEC → an RT communication infrastructure is necessary. What are its requirements?
– It should be able to handle up to 1000 signals.
– It should allow any-node to any-other-node communication.
– It should allow easy expandability: adding a new node.
– It should cover the delay/sampling-frequency needs of every global control loop (not necessarily of all RT systems).
– It should guarantee the source and the destination of a given piece of information.
– It should provide effective isolation among nodes: a node's HW and SW faults should not affect other nodes; only the data it produces may be affected.
– It should allow effective fault detection: transmission/transport/reception faults, and a source that is late or not sending at all.
The only cost-effective solution to all these requirements is a switch-based digital network. A minimal sketch of how these requirements translate into a message format is given below.
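
As an illustration only (not the JET packet format), a hedged sketch of a message header that would support the source-identification and fault-detection requirements above: a fixed source identifier, a cycle counter to reveal missing or late samples, a timestamp, and a checksum. All field names are hypothetical.

    #include <stdint.h>

    /* Hypothetical RT-network message header, illustrating the requirements
       above; it is not the JET packet format. */
    typedef struct {
        uint16_t source_id;      /* producing node: lets consumers verify the data source  */
        uint16_t payload_words;  /* number of 32-bit values that follow the header         */
        uint32_t cycle_counter;  /* incremented every cycle: gaps reveal lost or late data */
        uint64_t timestamp_us;   /* production time: detects a source that stops sending   */
        uint32_t payload_crc;    /* checksum: detects transmission/reception corruption    */
    } rt_msg_header;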

JET RT Network
In JET we eventually settled on 155 Mbit/s ATM AAL5 as the real time network. But which features of this network are the most important?
– AAL5 VCIs (virtual circuits) allow implementing a point-to-multipoint network where sources and destinations are set in the switch programming → minimises traffic, guarantees the data source, and a new node can be added with minimal or no re-testing of the network.
– Cell size = 53 bytes → low delay.
– The information is sent asynchronously as soon as it is ready → low worst-case delay.
– Only <10% of the available bandwidth is used → low delay: <30 packets, packet size <~100 numbers = 400 bytes, maximum frequency 500 Hz.
Why not multicast UDP (or RTP) on Gigabit Ethernet? We are now investigating ATM alternatives. Note also that JET RT network requirements are different from those presently chosen by the ITER design → a synchronous data bus.
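
A rough back-of-the-envelope check of these figures (my own arithmetic, not taken from the slide): a single stream of 400-byte packets at 500 Hz carries 400 × 8 × 500 = 1.6 Mbit/s, so a link carrying a handful of the <30 streams stays comfortably below 10% of the 155 Mbit/s line rate; it is this low utilisation, together with the small 53-byte cells, that keeps queuing delay, and hence worst-case latency, small.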

Requirements – CODAS - 1
The control network does not exist in isolation: all its nodes and switches are part of CODAS. What requirements does the network place on CODAS?
Nodes and switches are Local Units (Level 3):
– They are managed by the subsystems (Level 2) responsible for the actuator/diagnostic served.
– They respond to the parameter-setting/countdown/data-collection interfaces.
Node and switch configuration should be managed as part of the plant information (Level 1). The overall RT network coherency needs to be managed here:
– Data format and content coherence between producers and consumers of RT information shall be managed at this level. (A hedged sketch of such a centrally managed signal descriptor follows.)
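
To make the coherence requirement concrete, here is a purely illustrative sketch of the kind of record Level 1 could hold for each RT signal, so that the producer and all consumers are configured from a single definition; the field names are hypothetical and this is not the CODAS schema.

    #include <stdint.h>

    /* Hypothetical Level-1 signal descriptor: one record per RT signal,
       shared by the producing node and every consumer of that signal. */
    typedef struct {
        char     name[32];       /* e.g. "plasma_current"                   */
        uint16_t producer_id;    /* node that publishes the signal          */
        uint16_t packet_id;      /* RT-network packet that carries it       */
        uint16_t offset_words;   /* position of the value inside the packet */
        char     units[8];       /* e.g. "MA"                               */
        float    scale;          /* raw-to-engineering-unit conversion      */
    } rt_signal_descriptor;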

RT Node and CODAS

Requirements – CODAS - 2
CODAS shall provide a mechanism for testing a network configuration:
– It should be possible to enable artificial signal generators or plant simulators so that testing can be done outside a real experiment.
CODAS shall provide a means for local testing of any control loop:
– This means providing a sort of "virtual subsystem" that allows interoperation of parts of different subsystems without invoking a full JET pulse.
– This feature should allow parallel and asynchronous testing of different branches of the network. (A sketch of a possible simulator substitution is shown below.)
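
One hedged way to picture the "virtual subsystem" idea (my illustration, not the CODAS mechanism): the node binds each input to a source selected at configuration/build time, so a plant simulator can replace the real ADC driver and a control loop can be closed without a JET pulse. All names are invented.

    #include <stdio.h>

    typedef float (*input_source_fn)(void);

    float read_adc_plasma_current(void) { return 2.1f; } /* stand-in for the real ADC read   */
    float read_sim_plasma_current(void) { return 2.0f; } /* stand-in for a plant-model value */

    int main(void)
    {
    #ifdef VIRTUAL_SUBSYSTEM_TEST
        input_source_fn src = read_sim_plasma_current;   /* closed-loop test, no JET pulse */
    #else
        input_source_fn src = read_adc_plasma_current;   /* normal pulse operation         */
    #endif
        printf("plasma current input: %.2f MA\n", src());
        return 0;
    }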

JET CODAS & RT
JET CODAS provides most of the required features, but the rigidity of the Level 1-2-3 architecture is sometimes at odds with the flexibility of the RT network:
– Integrated commissioning of RT nodes belonging to different SS is very difficult, if not impossible.
Management of the network packet format or of the network topology is not yet handled at Level 1:
– This makes it difficult to safely reconfigure the network topology, for instance to implement closed-loop tests on simulators.
Level 1 pulse parameter management (the pulse schedule editor) is evolving rapidly to cater for RT needs → ever growing flexibility → more and more parameters:
– The user interface has evolved to offer mechanisms to manage the complexity. Users can choose to deal with the right level of complexity depending on their skills and requirements.

Requirements – Node
The RT node is either:
– a diagnostic elaboration,
– an actuator manager,
– an intelligent diagnostic (combining different diagnostics),
– a controller, or
– a plasma protection system (machine-limit or plasma-limit avoidance).

Requirements – Node
What are its main requirements?
Real time behaviour:
– Guarantee a maximum computation delay
– Guarantee a fixed data production rate
Reliability:
– Operate with a failure rate adequate for the importance of its task.
– Failures should be more probable during pulse preparation than during the pulse.
Diagnostics & actuators: functionality availability:
– Some diagnostic systems and some plasma actuators are only available in certain plasma conditions.
– In order to be usable as part of a certain control loop, each node must be able to provide its functionality within a certain range of plasma/tokamak conditions.
Controllers: contribute to the overall machine safety task:
– Avoid hitting limits
– Validate I/O
– Implement redundant strategies
A minimal sketch of a node cycle that enforces the timing and validation requirements follows.
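
As a purely illustrative sketch (not JET code), a node cycle reflecting the requirements above: wait for the cycle tick, validate the inputs before using them, compute, and check that the computation finished within its delay budget. All functions, constants and values are hypothetical stand-ins.

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Purely illustrative node cycle; the platform hooks are trivial stand-ins. */
    #define MAX_COMPUTE_US 500u   /* assumed maximum computation delay budget */

    uint64_t time_now_us(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
    }
    void  wait_for_cycle_tick(void)    { /* would block on a HW/SW timer at the fixed rate */ }
    float read_input(int ch)           { (void)ch; return 1.8f; /* fake measurement */ }
    void  publish_output(float v)      { printf("output %.3f\n", v); }
    void  raise_alarm(const char *msg) { fprintf(stderr, "ALARM: %s\n", msg); }

    void rt_node_cycle(void)
    {
        wait_for_cycle_tick();                     /* fixed data production rate     */
        uint64_t t0 = time_now_us();

        float meas = read_input(0);
        if (!isfinite(meas) || meas < -10.0f || meas > 10.0f) {
            meas = 0.0f;                           /* validate I/O before using it   */
            raise_alarm("input out of range");
        }

        publish_output(-0.5f * meas);              /* keep the RT algorithm simple   */

        if (time_now_us() - t0 > MAX_COMPUTE_US)   /* check the computation deadline */
            raise_alarm("computation deadline exceeded");
    }

    int main(void) { rt_node_cycle(); return 0; }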

JET RT Nodes: Implementation
What are the design guidelines followed in the most recent developments?
– Support the model-based design approach.
– Minimise the commissioning activities with and without plasma: machine time is the scarcest resource.
– Re-use hardware and software solutions.
– Minimise the risk of HW or SW faults.
– Minimise risk in the real-time process: implement the simplest RT algorithms, and shift as much of the computational burden as possible to the parameter processing (see the sketch below).
– Each control system should help reduce the effect of failures in the RTEC: do not rely on the quality of the other RT systems (validate inputs as much as possible), and do not rely on protection systems (avoid hitting machine or plasma limits as much as possible).
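
A hedged sketch of the "shift the burden to parameter processing" guideline (my example, not JET code): the setup phase turns user-facing parameters into a precomputed gain and limit, so the real-time step is reduced to a multiply and a clamp that also keeps the reference away from the machine limit.

    #include <stdio.h>

    /* Illustrative only: the expensive work happens once at setup; the RT path
       is a trivial multiply plus a clamp. */
    typedef struct {
        float gain;      /* precomputed from user-level parameters at setup time */
        float ref_max;   /* machine/plasma limit, also fixed at setup            */
    } precomputed_params;

    precomputed_params prepare_parameters(float bandwidth_hz, float plant_gain, float limit)
    {
        precomputed_params p;
        p.gain    = bandwidth_hz / plant_gain;   /* stand-in for the "hard" offline maths */
        p.ref_max = limit;
        return p;
    }

    float rt_step(const precomputed_params *p, float error)
    {
        float ref = p->gain * error;                 /* the whole real-time algorithm */
        if (ref >  p->ref_max) ref =  p->ref_max;    /* avoid hitting the limit       */
        if (ref < -p->ref_max) ref = -p->ref_max;
        return ref;
    }

    int main(void)
    {
        precomputed_params p = prepare_parameters(10.0f, 2.0f, 1.5f);
        printf("reference: %.2f\n", rt_step(&p, 0.4f));   /* clamps 2.00 down to 1.50 */
        return 0;
    }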

JET RT Nodes: HW Implementation
How are they typically implemented?
Most common solution:
– VME + PowerPC (VxWorks) + VME I/O + ATM
Some of the smart diagnostic nodes (ATM I/O only):
– PC running Windows NT4
– Standard Linux + RT task support
Recent developments:
– ATCA I/O with PCIe infrastructure → PC + RTAI Linux

New VS ATCA HW Bits
– 2 MHz ADCs; differential inputs; galvanically isolated inputs
– Digital filtering to 20 kHz; effective number of bits = 18
– DMA transfer to PC memory; jitter < 0.5 µs
– Controller based on a standard PC; processor: quad-core Core 2, 2.5 GHz
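
A rough plausibility check of the 18-bit figure (my own estimate assuming an ideal, white-noise-limited converter, not a vendor specification): decimating from 2 MHz to a 20 kHz bandwidth is an oversampling ratio of about 100, and ideal oversampling gains roughly half a bit per doubling, i.e. 0.5 · log2(100) ≈ 3.3 extra bits, so an ADC delivering about 15 effective bits at full rate would indeed land near 18 effective bits after the digital filtering.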

RT Node SW: Concepts
RT software is typically built around a framework. Different SW frameworks are presently used at JET; here I describe the one used by the main RT control systems. These are its most important characteristics:
It allows rapid implementation of a new RT node by dividing the SW into scriptable components:
– Most components are reused in all systems.
Components implement the plant control interfaces:
– Thanks to this feature, the framework has found use outside JET.
Components for RT functions:
– I/O modules implementing the interface to ADCs, DACs, the RT network...
– Standard function modules: data acquisition, RT display...
– Standard computation modules: state space, waveform generation...
– Application-specific custom modules, which are the only new developments.
It allows multiple parallel interacting RT activities:
– Either synchronised to HW timing or to another activity.
Multi-platform: Linux, RTAI, VxWorks, Windows, Solaris:
– RT code modules can be simulated outside the target hardware.
A minimal sketch of such a component interface follows.
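
To illustrate the component idea (an assumed interface for illustration, not the actual JET framework API), each module could expose a small uniform set of entry points that the framework calls from its configuration script at setup and then once per RT cycle; because modules only see this interface, the same code can run under VxWorks or RTAI, or be driven by a simulator on a desktop machine.

    /* Hypothetical component interface: every module (I/O, standard computation,
       application-specific) implements these entry points and is wired together
       by the framework from a configuration script. */
    typedef struct rt_module {
        const char *name;
        int  (*configure)(struct rt_module *self, const char *params);              /* parse Level-1 parameters  */
        int  (*execute)(struct rt_module *self, const float *in, float *out, int n);/* run one real-time cycle   */
        void (*shutdown)(struct rt_module *self);                                    /* release resources at end */
    } rt_module;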