ILC Control and IHEP Activity Jijiu Zhao, Gang Li, IHEP. CCAST ILC Accelerator Workshop and 1st Asia ILC R&D Seminar under the JSPS Core-University Program, Nov. 5–7, 2007.

Introduction The International Linear Collider (ILC) is a 200- to 500-GeV center-of-mass, high-luminosity electron-positron linear collider, based on 1.3-GHz superconducting radio-frequency accelerating cavities. The machine operates at a pulse repetition rate of 5 Hz, with each 1-ms beam pulse comprising ~3000 bunches.
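For a sense of the timescales involved, the quoted pulse structure implies an average in-pulse bunch spacing of roughly

\[
\frac{1\ \mathrm{ms}}{3000\ \mathrm{bunches}} \approx 333\ \mathrm{ns},
\]

so front-end electronics must resolve events at the sub-microsecond level, while the 5-Hz repetition rate sets a 200-ms pace for pulse-to-pulse processing.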

Introduction (cont’)
The control system overall design is evolving as details of the accelerator technical design are developed. The Control System Reference Design serves these purposes:
– Establish a functional and physical model for costing purposes
– Establish a starting point for engineering design and R&D efforts
– Communicate our vision of the control system

Scope of Controls
Computing Infrastructure
– Computer Center
– Business Computing
– Computing Networks
– Desktop Support
– Engineering Support
– Computer Security
– Management
Controls System
– Central Computers
– On-Site Control Room
– Controls Services: operator interface, automation, logging databases, data archival, alarms, diagnostics
Interfaces to Technical Systems
– Front Ends (hardware, software)
– Cabling
ATCA High Availability
LLRF Controls
Beam Feedback System
Protection Systems
– Machine Protection
– Personnel Protection
– Beam Containment
Network Infrastructure
Assembly and Testing of Controls Racks

Requirements of Controls
High Availability
– Controls system allocation: 2500 hours MTBF (Mean Time Between Failures), 5 hours MTTR (Mean Time To Repair), 15 hours downtime per year
– CS availability: 99% to 99.9%; each system % available
– Standardization
– Diagnostic Layer
Scalability
– ~100,000 devices to be controlled; millions of control points
Automation
– Sequencing, automatic startup, tuning, etc.
– Slow and fast beam-based feedback
Timing and Synchronization
– Precision RF phase references
– 0.1% amplitude and 0.1-degree phase stability
Remote Operation
– Enable collaborators to participate more fully
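These allocations are mutually consistent: with the stated MTBF and MTTR figures, steady-state availability works out to

\[
A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}} = \frac{2500}{2500 + 5} \approx 99.8\%,
\]

and 15 hours of downtime in a roughly 8760-hour year likewise corresponds to about 99.83% availability, inside the quoted 99% to 99.9% band.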

Functional Model
– Client Tier: GUIs, scripting
– Services Tier: “business logic” (device abstraction, feedback engine, state machines, online models, …)
– Front-End Tier: technical systems interfaces, at the control-point level

Functional Model (cont’)
Client Tier:
– Provides applications with which people directly interact. Applications range from engineer-oriented control consoles to high-level physics control applications to system configuration management.
– Engineer-oriented consoles are focused on the operation of the underlying accelerator equipment.
– High-level physics applications require a blend of services that combine data from the front-end tier with supporting data from the relational database, in the context of high-level device abstractions (e.g., magnets, BPMs).

Functional Model (cont’)
Services Tier:
– Provides services that coordinate many activities while presenting a well-defined set of public (non-graphical) interfaces.
– Device abstractions such as magnets and BPMs that incorporate engineering, physics, and control models are represented in this tier. This makes it possible to relate high-level machine parameters to low-level equipment settings in a standard way.
– For example, a parameter save/restore service can prevent two clients from simultaneously attempting to restore a common subset of operational parameters.
– This centralization in the control system provides many benefits in terms of coordination, conflict avoidance, security, and optimization.
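As an illustration of the coordination role described above, the sketch below shows how a save/restore service might serialize restore requests so that two clients cannot write overlapping parameter sets at once. This is a minimal, hypothetical sketch; the source names no implementation, and the class and method names are invented.

```python
import threading

class RestoreService:
    """Hypothetical services-tier save/restore coordinator (names invented).

    Serializes restores so that two clients cannot simultaneously
    restore a common subset of operational parameters.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._in_progress = set()  # parameter names currently being restored

    def restore(self, snapshot):
        """snapshot: dict mapping parameter name -> saved value."""
        names = set(snapshot)
        with self._lock:
            if self._in_progress & names:
                raise RuntimeError("restore conflicts with one already in progress")
            self._in_progress |= names
        try:
            for name, value in snapshot.items():
                self._write_setpoint(name, value)
        finally:
            with self._lock:
                self._in_progress -= names

    def _write_setpoint(self, name, value):
        # Stand-in: a real service would write through the front-end tier
        # (e.g. a channel / process-variable put).
        print(f"restoring {name} = {value}")
```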

Functional Model (cont’)
Front-End Tier:
– Provides access to the field I/O and underlying dedicated fast feedback systems.
– This tier is configured and managed by the services tier, but can run autonomously. For example, the services tier may configure a feedback loop in the front-end tier, but the loop itself runs without direct involvement.
– The primary abstraction in this tier is a channel, or process variable, roughly equivalent to a single I/O point.
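The channel/process-variable abstraction is most familiar from EPICS. As a sketch only, the snippet below shows single-point reads and writes using the pyepics Channel Access client; the source does not prescribe a client library, and the PV names are invented for illustration.

```python
# A minimal sketch, assuming the pyepics Channel Access client;
# the PV names are invented for illustration.
from epics import caget, caput

# Read a single I/O point (one "channel" / process variable).
readback = caget("ILC:QUAD:001:CURRENT")
print(f"quad current readback: {readback}")

# Write a setpoint; the front-end tier acts on it autonomously.
caput("ILC:QUAD:001:CURRENT:SP", 12.5, wait=True)
```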

Physical Model (global layer)

Physical Model (front-end field I/O)

Physical Model (cont’)
The ILC control system must reliably interact with more than 100,000 technical-system devices that could collectively amount to several million scalar and vector process variables (PVs) distributed across the many kilometers of beam lines and facilities at the ILC site. Information must be processed and distributed on a variety of timescales, from microseconds to several seconds. The overall philosophy is to develop an architecture that can meet the requirements while leveraging the cost savings and rapid evolutionary advancements of commercial off-the-shelf (COTS) components.

Network Infrastructure
Data collection, issuing and acting on setpoints, and pulse-to-pulse feedback algorithms are all synchronized to the pulse repetition rate. The controls network must be designed to ensure adequate response and determinism to support this pulse-to-pulse synchronous operation, which in turn requires prescribing compliance criteria for any device attached to this network. Additionally, large data sources must be prudently managed to avoid network saturation. Dedicated compute nodes associated with each backbone network switch provide services for monitoring, data reduction, and implementing feedback algorithms.
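To make the pulse-synchronous constraint concrete, here is a hypothetical compute-node main loop paced at the 5-Hz repetition rate. It is a sketch only: a real node would be triggered by the machine timing system, not a wall-clock timer, and the on_pulse work is a placeholder.

```python
import time

PULSE_PERIOD_S = 0.2  # 1 / (5 Hz machine repetition rate)

def on_pulse(pulse_id):
    """Stand-in for pulse-synchronous work on a compute node:
    collect data, reduce it, run pulse-to-pulse feedback, issue setpoints."""
    pass

# All work for pulse N must complete within one 200 ms period so that
# new setpoints are in place before pulse N+1 arrives.
pulse_id = 0
next_deadline = time.monotonic() + PULSE_PERIOD_S
while True:
    on_pulse(pulse_id)
    pulse_id += 1
    remaining = next_deadline - time.monotonic()
    if remaining > 0:
        time.sleep(remaining)
    else:
        print(f"pulse {pulse_id}: deadline missed by {-remaining:.3f} s")
    next_deadline += PULSE_PERIOD_S
```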

Network Infrastructure (cont’): BCD Control Room Cluster Architecture

ILC test facility and collaboration
Several accelerator facilities around the world serve as ILC test facilities:
– LLRF: Fermilab, DESY, KEK, SNS, LBNL, U. Penn, and others
– ATCA: DESY is developing a version of its LLRF SimCon board on ATCA, and several other institutions worldwide, such as SLAC, are beginning to explore ATCA.
– Beam Instrumentation: Fermilab, SLAC, KEK, DESY, U. Oxford, U. London, and others
Goal:
– The ILC controls work is highly collaborative, and several work packages overlap between institutions.
– Carry out research at these facilities to assess the cost and solve the key technical problems of ILC controls.

IHEP Activity
ATCA (Advanced Telecom Computing Architecture) has been chosen as the electronics platform for ILC controls:
– A unique open standard designed specifically for five-nines (99.999%) availability at the crate level
– Core components available from industry
– Crates with intelligent platform management of power, module type and ID, load shedding and re-routing
– N+1 redundancy options for core controllers, communication, power, and cooling fans
– All-serial multi-gigabit communications, by wire for short distances or fiber for long distances
– Controllers, switches, and high-performance processors: ideal for the core of a control system
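For scale, five-nines availability at the crate level corresponds to

\[
8760\ \mathrm{h/yr} \times (1 - 0.99999) \approx 5.3\ \mathrm{min/yr}
\]

of allowable downtime, which is what motivates the intelligent platform management and N+1 redundancy options listed above.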

IHEP Activity (cont’)

IHEP Activity (cont’)
The IHEP control group has now obtained budget support and plans to set up an ATCA prototype in the lab. Planned work on the ATCA platform includes:
– Study the performance of ATCA, including the shelf manager, redundant switches, power supplies, etc.
– Install EPICS on the ATCA system.
– Research and test EPICS HA on ATCA.
– etc.

IHEP Activity (cont’)
Research fields:
– ATCA HA
– Xen technology
– Linux HA
– EPICS HA
[Figure: the ATCA prototype, showing shelf management, dual Xeon processors, and the switch]

IHEP Activity (cont’)
Xen Technology
– From the XenSource company
– An open-source virtual machine monitor (VMM), or hypervisor, for the x86 processor architecture
– Can securely execute multiple virtual machines on a single physical system with close-to-native performance
– Supports live migration of running virtual machines between physical hosts
– Xend: a daemon responsible for managing virtual machines and providing access to their consoles
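As an illustration of the live-migration feature, the sketch below uses the libvirt Python bindings, which can manage Xen domains. This is an assumption: the slide itself names only Xend, and the connection URIs and domain name are invented.

```python
import libvirt  # libvirt Python bindings (an assumption; the slide names only Xend)

# Hypothetical connection URIs and domain name.
src = libvirt.open("xen+ssh://source-host/")
dst = libvirt.open("xen+ssh://destination-host/")

dom = src.lookupByName("epics-ioc-vm")
# VIR_MIGRATE_LIVE keeps the guest running while its memory is transferred.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```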

IHEP Activity (cont’)
Linux HA
– Puts together a group of computers that trust each other to provide a service even when system components fail
– When one machine goes down, others take over its work; this involves IP address takeover, service takeover, etc., and new work comes to the “takeover” machine
– Not primarily designed for high performance
– Cannot achieve 100% availability (nothing can); HA clustering is designed to recover from single faults
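The sketch below illustrates the IP-address-takeover idea on a standby node. It is hypothetical: a real deployment would use the Linux-HA (Heartbeat) software itself, and the interface name, peer address, and virtual IP are invented examples.

```python
import subprocess
import time

PEER = "192.168.1.10"         # active node being monitored (invented)
VIRTUAL_IP = "192.168.1.100"  # service address that clients use (invented)
IFACE = "eth0"

def peer_alive():
    """Crude liveness check: a single ICMP ping with a 1 s timeout."""
    return subprocess.call(["ping", "-c", "1", "-W", "1", PEER],
                           stdout=subprocess.DEVNULL) == 0

# Standby node: poll the peer, then take over its virtual IP on failure,
# so that new work comes to this machine; service takeover would follow.
while peer_alive():
    time.sleep(2)

subprocess.check_call(["ip", "addr", "add", f"{VIRTUAL_IP}/24", "dev", IFACE])
print("peer down: virtual IP taken over")
```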

IHEP Activity (cont’)
EPICS HA can be achieved via two methods: one is Xen technology and the other is Linux HA.
[Fig. 1: EPICS HA structure via Xen. Fig. 2: EPICS HA structure via Linux HA]

IHEP Activity (cont’)
Next steps: study EPICS HA via HA middleware such as OpenClovis or GoAhead.
[Fig. 3: EPICS HA structure via OpenClovis or GoAhead]

IHEP Activity (cont’)
Conclusion:
– Accumulate more experience with the ATCA prototype system.
– We hope the ATCA research results can be applied to the ILC test facilities as soon as possible and contribute to the assessment of the ILC control system.
– Improve and strengthen close cooperation with the other ILC control teams around the world.

Thanks for your attention! Some of the information in this report is drawn from the ILC controls kick-off meeting, Aug. 20–22, 2007.