COntrol, Data Access and Communication System for ITER


ITER CODAC: COntrol, Data Access and Communication System for ITER
Anders Wallander, ITER Organization (IO), 13067 St. Paul-lez-Durance, France

Basics
Plasma is an ionized hot gas: the fourth state of matter.
Lawson criterion ("triple product"): Temperature * Density * Time > Big Number
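The slide leaves the threshold as "Big Number"; for reference (a commonly quoted figure, not taken from the slide), the D-T condition is roughly

    n \, T \, \tau_E \gtrsim 3 \times 10^{21} \ \mathrm{keV \cdot s \cdot m^{-3}}

where n is the plasma density, T the ion temperature and \tau_E the energy confinement time.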

ITER
Goal: Demonstrate the feasibility of fusion as an energy source.
Metric: Q = 10 sustained for 300-500 seconds (Q = 10 means output power equals 10 times input power).
Schedule: 10-year construction phase just started; first plasma 2019, first D-T plasma 2027.
Collaboration: CN, EU, IN, JA, KO, RF, US.
Method: Magnetic confinement (tokamak).
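For orientation (the power figures below are the commonly quoted ITER design values, not stated on the slide), the fusion gain is

    Q \equiv \frac{P_{\mathrm{fusion}}}{P_{\mathrm{input}}}, \qquad Q = 10,\ P_{\mathrm{input}} \approx 50\ \mathrm{MW} \;\Rightarrow\; P_{\mathrm{fusion}} \approx 500\ \mathrm{MW}.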

This is ITER

This is the ITER Agreement: 140 slices

A bit of an interface problem

Island mentality

Missing Items

The control system can help to fix this

The control system is horizontal

it connects to everything

it identifies and may eliminate missing items

it integrates

and is the primary tool for operation

But this will only work if… all these links work.
Standards
Architecture

EPICS
In February 2009 the ITER Organization decided to use EPICS for the control system. This decision was based on three independent studies.
In February 2010 the ITER Organization released the first version of CODAC Core System, which is essentially a package of selected EPICS products.
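As a minimal sketch of what EPICS looks like from a client on the network, assuming the pyepics Python bindings are available and using entirely hypothetical PV names:

    # Minimal Channel Access client sketch; requires pyepics, PV names are hypothetical.
    from epics import caget, caput

    # Read a process variable published by some IOC on the network.
    value = caget("DEMO-PSH0:COS-STATE")
    print("current state:", value)

    # Write a setpoint; wait=True blocks until the IOC has processed the put.
    caput("DEMO-COOL0:FLOW-SETPOINT", 42.0, wait=True)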

Finite set of “Lego blocks”, which can be selected and connected as required

Plant System I&C is a deliverable by an ITER member state: a set of standard components selected from the catalogue, with one and only one plant system host.

An ITER Subsystem is a set of related plant system I&Cs.

CODAC Servers and Terminals are servers running Red Hat Enterprise Linux (RHEL) and EPICS/CSS/???. These servers implement supervision, monitoring, coordination, configuration, automation, data handling, archiving, visualization, HMI…

Plant Operation Network is the workhorse: a general-purpose flat network using industrial managed switches and mainstream IT technology.

Plant System Host is an IO-furnished hardware and software component installed in a Plant System I&C cubicle. There is one and only one PSH in a Plant System I&C. The PSH runs RHEL (Red Hat Enterprise Linux) and an EPICS (Experimental Physics and Industrial Control System) soft IOC (Input Output Controller). It provides standard functions such as maintaining (monitoring and controlling) the Common Operation State (COS) of the Plant System. The PSH is fully data driven, i.e. it is customized for a particular Plant System I&C by configuration. There is no plant-specific code in a PSH. The PSH has no I/O.

Slow Controller is a Siemens Simatic S7 industrial automation Programmable Logic Controller (PLC). There may be zero, one or many Slow Controllers in a Plant System I&C. A Slow Controller runs software and plant-specific logic programmed in STEP 7 and interfaces to either the PSH or a Fast Controller using an IO-furnished interface (EPICS driver and self-description). A Slow Controller normally has I/O, and IO supports a set of standard I/O modules. A Slow Controller has no interface to the HPN. A Slow Controller synchronizes its time using NTP over PON. A Slow Controller can act as supervisor for other Slow Controllers.
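In the ITER architecture the PLC is reached through the IO-furnished EPICS driver and self-description; purely as an illustration of talking to an S7 PLC from a Linux host, here is a sketch using the open-source python-snap7 bindings (IP address, rack/slot and data-block layout are invented):

    # Illustrative S7 data-block read via python-snap7 (not the IO-furnished EPICS driver).
    import snap7
    from snap7.util import get_real

    client = snap7.client.Client()
    client.connect("192.168.0.10", 0, 1)         # hypothetical PLC address, rack 0, slot 1

    raw = client.db_read(1, 0, 4)                # read 4 bytes from offset 0 of DB1 (layout invented)
    print("measured value:", get_real(raw, 0))   # interpret bytes 0..3 as an S7 REAL

    client.disconnect()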

Fast Controller is a dedicated industrial controller implemented in a PCI-family form factor with PCIe and Ethernet communication fabric. There may be zero, one or many Fast Controllers in a Plant System I&C. A Fast Controller runs RHEL and an EPICS IOC. It acts as a Channel Access server and exposes process variables (PVs) to PON. A Fast Controller normally has I/O, and IO supports a set of standard I/O modules with associated EPICS drivers. A Fast Controller may have an interface to the High Performance Networks (HPN), i.e. SDN for plasma control and TCN for absolute time and programmed triggers and clocks. Fast Controllers involved in critical real-time tasks run an RT-enabled (TBD) version of Linux on a separate core or CPU. A Fast Controller can have plant-specific logic. A Fast Controller can act as supervisor for other Fast Controllers and/or Slow Controllers. The supervisor maintains the Plant System Operating State.
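Because a Fast Controller is simply an EPICS IOC serving PVs over Channel Access, any client on PON can monitor them; a small subscription sketch, again with pyepics and a hypothetical PV name:

    # Monitor value changes of a PV served by a Fast Controller IOC (PV name is hypothetical).
    import time
    from epics import PV

    def on_change(pvname=None, value=None, timestamp=None, **kw):
        # Invoked by pyepics on every value update delivered over Channel Access.
        print(pvname, "->", value, "at", timestamp)

    pv = PV("DEMO-DIAG0:SIGNAL", callback=on_change)

    time.sleep(10)          # keep the client alive while updates arrive
    pv.clear_callbacks()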

High Performance Computers are dedicated computers (multi-core, GPU) running plasma control algorithms.

High Performance Networks are physically dedicated networks that implement functions not achievable by the conventional Plant Operation Network. These functions are distributed real-time feedback control, high-accuracy time synchronization and bulk video distribution.
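The SDN protocol itself is ITER-specific; purely to illustrate the idea of publishing samples on a dedicated network segment, here is a toy UDP multicast publisher (group address, port and payload layout are invented):

    # Toy publisher: one packed sample sent to a multicast group (addresses are hypothetical).
    import socket
    import struct
    import time

    MCAST_GRP, MCAST_PORT = "239.0.0.1", 5005

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    payload = struct.pack("!dd", time.time(), 3.14)   # timestamp + one measured value
    sock.sendto(payload, (MCAST_GRP, MCAST_PORT))
    sock.close()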

Estimate of system size (per ITER subsystem: # of PS I&C, # of PSH + controllers, # of servers + terminals)
Tokamak: 6, 55
Cryo and cooling water: 5, 40, 3
Magnets and coil power supply: 8, 30
Building and power: 37, 66
Fuelling and vacuum: 45
Heating: 4
Remote handling: 2, 15
Hot cell and environment: 20
Test blanket: 24, 7
Diagnostics: 89, 400
Central: 170
TOTAL: 167, 750, 220
~1000 computers connected to PON

Conclusions
The non-technical peculiarities of the ITER project have been addressed.
The components making up the ITER control system have been defined and a baseline architecture outlined.
Flexibility in combining these standard components has been emphasized.
Having a set of standard components and a sound architecture will ease integration.
We would like to thank the EPICS community for allowing us to get started very quickly on the ITER control system.
We promise to pay the community back in the future with the development of the ITER control system and an enlargement of the community.
https://www.iter.org/org/team/chd/cid/codac/Pages/