
RT2010, Lisboa, Portugal, May 28, 2010

Page 1: Baseline architecture of ITER control system
Anders Wallander, Franck Di Maio, Jean-Yves Journeaux, Wolf-Dieter Klotz, Petri Makijarvi, Izuru Yonekawa
ITER Organization (IO), St. Paul lez Durance, France

Page 2: Basics
- Goal: demonstrate the feasibility of fusion as an energy source (Q = 10 means output power equal to 10 times the input power)
- Schedule: 10-year construction phase; first plasma 2019, first D-T plasma 2027
- Collaboration: CN, EU, IN, JA, KO, RF, US

Page 3: This is ITER

Page 4: This is the ITER Agreement (140 slices)

Page 5: A few interface problems

Page 6: Island mentality

Page 7: Missing items

Page 8: The control system can help to fix this

Page 9: The control system is horizontal

Page 10: …it connects to everything

Page 11: …it identifies and may eliminate missing items

Page 12: …it integrates

Page 13: …and it is the primary tool for operation

Page 14: But this will only work if… all these links work
- Standards
- Architecture

Page 15: A finite set of “Lego blocks” that can be selected and connected as required

Page 16: Plant System I&C
A Plant System I&C is a deliverable of an ITER member state: a set of standard components selected from the catalogue, with one and only one plant system host.

Page 17: ITER Subsystem
An ITER subsystem is a set of related plant system I&Cs.

Page 18: Plant Operation Network
The Plant Operation Network (PON) is the workhorse: a general-purpose flat network built on industrial managed switches and mainstream IT technology.

Page 19: Plant System Host
The Plant System Host (PSH) is an IO-furnished hardware and software component installed in a Plant System I&C cubicle. There is one and only one PSH in a Plant System I&C. The PSH runs RHEL (Red Hat Enterprise Linux) and an EPICS (Experimental Physics and Industrial Control System) soft IOC (Input Output Controller). It provides standard functions such as maintaining (monitoring and controlling) the Common Operation State (COS) of the plant system. The PSH is fully data driven, i.e. it is customized for a particular Plant System I&C by configuration; there is no plant-specific code in a PSH. The PSH has no I/O.
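The data-driven idea can be illustrated with a small sketch: the behavior of the supervisor below is defined entirely by a configuration table, with no plant-specific code. The state and command names are illustrative placeholders, not the actual ITER COS definition.

```python
# Minimal sketch of a data-driven state machine, in the spirit of the PSH
# maintaining the Common Operation State (COS). The states and commands
# below are invented for illustration; they are not the real COS.

# Configuration table: (current_state, command) -> next_state.
# In a real PSH this would come from configuration data, not from code.
COS_TRANSITIONS = {
    ("OFF", "initialize"): "NOT_READY",
    ("NOT_READY", "make_ready"): "READY",
    ("READY", "start"): "OPERATING",
    ("OPERATING", "stop"): "READY",
    ("READY", "shutdown"): "OFF",
}

class PlantSystemHost:
    """Data-driven supervisor: behavior is set by the table, not the code."""
    def __init__(self, transitions, initial="OFF"):
        self.transitions = transitions
        self.state = initial

    def handle(self, command):
        key = (self.state, command)
        if key not in self.transitions:
            raise ValueError(f"command {command!r} not allowed in {self.state}")
        self.state = self.transitions[key]
        return self.state

psh = PlantSystemHost(COS_TRANSITIONS)
for cmd in ("initialize", "make_ready", "start"):
    psh.handle(cmd)
print(psh.state)  # OPERATING
```

Customizing such a component for a different plant system means shipping a different table, which is exactly why no plant-specific code needs to live in the PSH.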

Page 20: Fast Controller
A Fast Controller is a dedicated industrial controller in a PCI-family form factor with PCIe and Ethernet communication fabric. There may be zero, one or many Fast Controllers in a Plant System I&C. A Fast Controller runs RHEL and an EPICS IOC; it acts as a channel access server and exposes process variables (PVs) on the PON. A Fast Controller normally has I/O, and IO supports a set of standard I/O modules with associated EPICS drivers. A Fast Controller may interface to the High Performance Networks (HPN), i.e. SDN for plasma control and TCN for absolute time and programmed triggers and clocks. Fast Controllers involved in critical real-time tasks run an RT-enabled (TBD) version of Linux on a separate core or CPU. A Fast Controller can have plant-specific logic, and it can act as supervisor for other Fast Controllers and/or Slow Controllers; the supervisor maintains the Plant System Operating State.
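The idea of dedicating a core to the critical real-time task can be sketched as follows. This is an illustration only, not ITER code: the core number and the 1 ms cycle period are arbitrary choices, and the affinity call is best effort (Linux only).

```python
# Sketch: pin the process to one CPU core and run a fixed-period loop,
# as a Fast Controller might do for a critical real-time task.
# Illustration only: core 1 and the 1 ms period are arbitrary.
import os
import time

def pin_to_core(core):
    """Restrict this process to a single core (Linux only); best effort."""
    try:
        os.sched_setaffinity(0, {core})
        return True
    except (AttributeError, OSError):
        return False  # not Linux, single-core machine, or not permitted

def periodic_loop(period_s, cycles, work):
    """Run `work` every `period_s` seconds, re-anchoring each deadline to
    absolute time so jitter in one cycle does not accumulate into the next."""
    next_deadline = time.monotonic()
    for _ in range(cycles):
        work()
        next_deadline += period_s
        delay = next_deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)

pin_to_core(1)
samples = []
periodic_loop(0.001, 100, lambda: samples.append(time.monotonic()))
print(len(samples))  # 100
```

Re-anchoring to an absolute deadline rather than sleeping a fixed interval is the standard way to keep a control loop's average rate exact; an RT-enabled kernel then bounds the worst-case jitter of each wake-up.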

Page 21: High Performance Computers
High Performance Computers are dedicated computers (multi-core, GPU) running plasma control algorithms.

Page 22: High Performance Networks
High Performance Networks are physically dedicated networks implementing functions not achievable with the conventional Plant Operation Network: distributed real-time feedback control, high-accuracy time synchronization, and bulk video distribution.

Page 23: Estimate of system size

ITER subsystem                  # of PS I&C   # of PSH+controllers   # of servers+terminals
Tokamak                              6                55                      6
Cryo and cooling water               5                40                      3
Magnets and coil power supply        8                30                      3
Building and power                  37                66                      3
Fuelling and vacuum                  6                45                      3
Heating                              8                55                      4
Remote handling                      2                15                      2
Hot cell and environment             3                20                      2
Test blanket                         6                24                      7
Diagnostics
Central                              0                 0                    170
TOTAL

~1000 computers connected to PON

Page 24: Timing System
It is common practice for large experimental facilities to invent their own home-made timing system, and we want to avoid that. We believe that IEEE 1588 (PTP v2) provides a COTS alternative fulfilling ITER requirements. IEEE 1588-2008 provides 50 ns RMS synchronization accuracy of absolute time over Ethernet, with the possibility to program triggers and clocks synchronized with this absolute time using COTS hardware. The standard is being endorsed by more and more suppliers, and we will see many new COTS products in the future. It also provides an evolution path to White Rabbit, being developed by CERN. We have therefore baselined IEEE 1588 for TCN and will confirm this decision by further evaluations in the 2nd half of 2010.
Main requirement: 50 ns RMS absolute time synchronization (off-line correlation of diagnostics).
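As a reminder of how PTP arrives at a common absolute time: the basic IEEE 1588 exchange yields four timestamps (t1…t4) from which the slave computes its clock offset and the one-way path delay, assuming a symmetric path. The timestamp values below are made up for illustration.

```python
# IEEE 1588 (PTP) basic offset/delay computation from the four timestamps
# of a Sync / Delay_Req exchange:
#   t1: master sends Sync        t2: slave receives Sync
#   t3: slave sends Delay_Req    t4: master receives Delay_Req
# Assumes a symmetric network path; the example timestamps are invented.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Example: slave clock runs 100 ns ahead, one-way delay 250 ns (times in ns).
t1 = 1_000
t2 = t1 + 250 + 100   # propagation delay + slave offset
t3 = 2_000
t4 = t3 - 100 + 250   # remove slave offset, add propagation delay
offset, delay = ptp_offset_and_delay(t1, t2, t3, t4)
print(offset, delay)  # 100.0 250.0
```

The symmetric-path assumption is why accuracy at the 50 ns level requires hardware timestamping and PTP-aware (boundary or transparent clock) switches rather than plain Ethernet gear.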

Page 25: Distributed real-time plasma control
Main characteristics of ITER distributed plasma control:
- decoupling and separation of concerns
- data driven
- multiple input, multiple output (MIMO)
- non-intrusive probing
- flexibility
- scalability
- simulation support
- minimized latency and jitter

Two schools of thought for the real-time network:
- reflective memory
- Ethernet based (e.g. UDP, RTnet)

The technology decision is delayed while watching the market; further test beds and evaluations in 2011.
Main requirements: control cycles Hz–kHz, peak bandwidth 25 MB/s, number of nodes participating
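Whichever transport wins, a distributed feedback sample has to be serialized into a compact fixed-size datagram. A minimal sketch with an invented layout (sequence number, nanosecond timestamp, four float64 signal values); none of these field names or sizes come from the ITER specification.

```python
# Sketch of serializing a real-time control sample into a fixed-size
# datagram, e.g. for a UDP-based SDN. The layout (uint32 sequence number,
# uint64 timestamp in ns, 4 x float64 signal values) is invented for
# illustration; it is not an ITER message format.
import struct

SDN_FORMAT = "<IQ4d"                     # little-endian, fixed sizes
SDN_SIZE = struct.calcsize(SDN_FORMAT)   # 44 bytes per datagram

def pack_sample(seq, timestamp_ns, values):
    """Serialize one control sample; `values` must hold four floats."""
    return struct.pack(SDN_FORMAT, seq, timestamp_ns, *values)

def unpack_sample(data):
    """Inverse of pack_sample; returns (seq, timestamp_ns, values)."""
    seq, ts, *values = struct.unpack(SDN_FORMAT, data)
    return seq, ts, values

msg = pack_sample(42, 1_234_567_890, [1.0, -2.5, 0.0, 3.14])
seq, ts, values = unpack_sample(msg)
print(seq, len(msg))  # 42 44
```

At a 1 kHz control cycle such a 44-byte datagram amounts to only ~44 kB/s per publisher, so the stated 25 MB/s peak bandwidth leaves ample headroom for many nodes and larger payloads; a sequence number lets subscribers detect lost or reordered datagrams, which UDP does not guarantee.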

Page 26: Conclusions
- The non-technical peculiarities of the ITER project have been addressed
- The components making up the ITER control system have been defined and a baseline architecture outlined
- Flexibility in combining these standard components has been emphasized
- Having a set of standard components and a sound architecture will ease integration
- Issues of timing and feedback control have been touched on
- We intend to continue working with our partners all over the world to make the ITER control system contribute to ITER's success