ITER Control System Technology Study
Klemen Žagar, klemen.zagar@cosylab.com
Overview
- About ITER
- ITER Control and Data Acquisition System (CODAC) architecture
- Communication technologies for the Plant Operation Network
- Use cases/requirements
- Performance benchmark
EPICS Collaboration Meeting, Vancouver, April 2009
A Note!
- The information about ITER and the CODAC architecture presented herein is a summary of ITER Organization's presentations.
- Cosylab prepared the studies on communication technologies for ITER.
About ITER (International Thermonuclear Experimental Reactor)
About ITER
Main components:
- Toroidal Field Coils: Nb3Sn, 18, wedged
- Central Solenoid: Nb3Sn, 6 modules
- Poloidal Field Coils: Nb-Ti, 6
- Vacuum Vessel: 9 sectors
- Port Plugs: heating/current drive, test blankets, limiters/RH, diagnostics
- Cryostat: 24 m high x 28 m dia.
- Blanket: 440 modules
- Torus Cryopumps: 8
- Divertor: 54 cassettes
Plasma parameters:
- Major plasma radius: 6.2 m
- Plasma volume: 840 m^3
- Plasma current: 15 MA
- Typical density: 10^20 m^-3
- Typical temperature: 20 keV
- Fusion power: 500 MW
Machine mass: 23,350 t (cryostat + VV + magnets)
- Shielding, divertor and manifolds: 7,945 t + 1,060 t port plugs
- Magnet systems: 10,150 t; cryostat: 820 t
CODAC Architecture
Plant Operation Network (PON)
- Command invocation
- Data streaming
- Event handling
- Monitoring (see the sketch after this list)
- Bulk data transfer
- PON self-diagnostics: diagnosing problems in the PON; monitoring the load of the PON network
- Process control: reacting to events in the control system by issuing commands or transmitting other events
- Alarm handling: transmission of notifications of anomalous behavior; management of currently active alarm states
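To make the Monitoring use case (and part of Alarm Handling) concrete, here is a minimal sketch of a PON client subscribing to value updates over EPICS Channel Access. It uses the pyepics binding and a hypothetical PV name (PON:TEMP:01); neither appears in the original study, which compared the middleware stacks themselves.

```python
# Minimal Channel Access monitoring sketch (pyepics).
# PV name "PON:TEMP:01" is hypothetical; any scalar PV served by an IOC works.
import time
import epics

def on_update(pvname=None, value=None, severity=None, timestamp=None, **kw):
    # Called by the CA client library on every value change (a "monitor").
    # Severity can drive simple client-side alarm handling.
    print(f"{pvname} = {value} (severity={severity}, ts={timestamp})")

pv = epics.PV("PON:TEMP:01", auto_monitor=True, callback=on_update)

# Keep the client alive so CA callbacks can be delivered.
time.sleep(60)
```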
Prototype and Benchmarking
We measured latency and throughput in a controlled test environment:
- Allows side-by-side comparison
- Hands-on experience is also more comparable
Latency test:
- Where a central service is involved (OmniNotify, IceStorm or EPICS/CA):
  - Send a message to the central service
  - Upon receipt on the sender node, measure the difference between send and receive times
- Without a central service (omniORB, ICE, RTI DDS): round-trip test
  - Send a message to the receiving node
  - Respond
  - Upon receipt of the response, measure the difference
Throughput test:
- Send messages as fast as possible
- Measure the differences between receive times
Statistical analysis to obtain average, jitter, minimum, 95th percentile, etc. (a self-contained round-trip sketch follows below)
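As an illustration of the round-trip methodology only (not the actual test harness used in the study), the sketch below measures round-trip latency over a local UDP echo socket and reports the same statistics the study quotes: average, minimum, jitter and 95th percentile. Host, port, payload size and sample count are arbitrary choices.

```python
# Round-trip latency micro-benchmark sketch (UDP echo on localhost).
# Illustrates the methodology only; the study measured real middleware stacks.
import socket
import statistics
import threading
import time

HOST, PORT = "127.0.0.1", 15000   # arbitrary local endpoint
N_SAMPLES = 10_000
PAYLOAD = b"x" * 64               # "small payload"

def echo_server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((HOST, PORT))
    ready.set()
    for _ in range(N_SAMPLES):
        data, addr = srv.recvfrom(4096)
        srv.sendto(data, addr)        # bounce the message straight back
    srv.close()

ready = threading.Event()
threading.Thread(target=echo_server, args=(ready,), daemon=True).start()
ready.wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
latencies = []
for _ in range(N_SAMPLES):
    t0 = time.perf_counter()
    cli.sendto(PAYLOAD, (HOST, PORT))
    cli.recvfrom(4096)                # wait for the echo
    latencies.append(time.perf_counter() - t0)
cli.close()

latencies.sort()
print(f"avg   : {statistics.mean(latencies) * 1e6:8.1f} us")
print(f"min   : {latencies[0] * 1e6:8.1f} us")
print(f"jitter: {statistics.stdev(latencies) * 1e6:8.1f} us")
print(f"95th  : {latencies[int(0.95 * len(latencies))] * 1e6:8.1f} us")
```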
Applicability to Use Cases

Use case             Channel Access    omniORB CORBA    RTI DDS    ZeroC ICE
Command invocation   4/2               5/5              4/3        5/5
Event handling       4/3               4/4              5/4        4/5
Monitoring           5/5 (EPICS)       5/5 (TANGO)      5/3        -
Bulk data transfer   5/3               4/4              5/4        4/4
Diagnostics          5                 4                5          3
Process control      5 (EPICS)         5 (TANGO)        4          3
Alarm handling       5 (EPICS)         5 (TANGO)        3          3

First number: performance. Second number: functional applicability of the use case.
1. not applicable at all
2. applicable, but at a significant performance/quality cost compared to optimal solution; custom design required
3. applicable, but at some performance/quality cost compared to optimal solution; custom design required
4. applicable, but at some performance/quality cost compared to optimal solution; foreseen in existing design
5. applicable, and close to optimal solution; use case foreseen in design
PON Latency (small payloads)
Ranking:
1. omniORB (one-way invocations)
2. ICE (one-way invocations)
3. RTI DDS (not tuned for latency)
4. EPICS
5. OmniNotify
6. IceStorm
PON Throughput
Ranking:
1. RTI DDS
2. omniORB (one-way invocations)
3. ICE (one-way invocations)
4. EPICS
5. IceStorm
6. OmniNotify
PON Scalability
- RTI DDS efficiently leverages IP multicasting (source: RTI).
- With technologies that do not use IP multicasting/broadcasting, per-subscriber throughput is inversely proportional to the number of subscribers! (source: RTI)
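To illustrate why IP multicasting changes the scaling behavior: a publisher sends each update once to a multicast group and the network fans it out, so adding subscribers does not multiply the sender's outgoing traffic. Below is a toy sketch of a multicast publisher and subscriber using plain UDP sockets; the group address and port are arbitrary, and this is only a simplified model of what DDS does internally.

```python
# Toy IP-multicast publisher/subscriber sketch (plain UDP sockets).
# Group/port are arbitrary; real DDS adds discovery, QoS, reliability, etc.
import socket
import struct
import sys

MCAST_GRP, MCAST_PORT = "239.255.0.1", 5007

def publisher():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    for i in range(100):
        # One send reaches every subscriber joined to the group.
        sock.sendto(f"sample {i}".encode(), (MCAST_GRP, MCAST_PORT))

def subscriber():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    # Join the multicast group on all interfaces.
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(1024)
        print(data.decode())

if __name__ == "__main__":
    publisher() if "--pub" in sys.argv else subscriber()
```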
EPICS
Ultimately, the ITER Organization has chosen EPICS:
- Very good performance.
- Easiest to work with.
- Very robust.
- Full-blown control system infrastructure (not just middleware).
- Likely to be around for a while (widely used by many labs).
Where could EPICS improve?
- Use IP multicasting for monitors.
- A remote procedure call layer (e.g., "abuse" waveforms to transmit data serialized with Google Protocol Buffers, or use PVData in EPICS v4). A sketch of the waveform idea follows below.
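The "abuse waveforms" idea is roughly this: serialize a request into a byte buffer and write it to a char waveform PV that the server side decodes and acts on. The sketch below is a hypothetical illustration using pyepics and JSON instead of Protocol Buffers to stay dependency-free; the PV name CMD:REQUEST and the device/action fields are invented, and this is not an ITER or EPICS API.

```python
# Hypothetical RPC-over-waveform sketch (pyepics + JSON stand-in for protobuf).
# PV "CMD:REQUEST" is assumed to be a char waveform record on some IOC.
import json
import epics

def send_command(device, action, **params):
    # Serialize the request into bytes (the slide suggests Protocol Buffers;
    # JSON is used here only to keep the sketch self-contained).
    payload = json.dumps({"device": device, "action": action, "params": params})
    data = [ord(c) for c in payload] + [0]        # NUL-terminated char array
    return epics.caput("CMD:REQUEST", data, wait=True)

# Example: ask a (hypothetical) power supply to ramp to 10 A.
send_command("PS01", "ramp", setpoint=10.0)
```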
Thank You for Your Attention