Mar 26, 2014 FGClite Power Control TE-EPC-CCE.


CERN 2014 Outline
1. FGClite Introduction
2. Challenges
3. Power Control & Machine Protection Links
4. System Failure Modes
5. Power Cycle Procedure
6. Conclusions

FGClite Introduction (1/4): Power converters

FGClite Introduction (2/4): Technical Concept
Dedicated design methodology:
1. No microcontrollers/DSPs: computationally intensive tasks are moved to the gateway
2. Extensive radiation testing to select and validate all electronic components
3. Highly available gateway/FGClite communication required

FGClite Introduction (3/4): Project Timeline
– Hardware design & component type testing
Q3/2014 – 10 fully validated FGClite proof-of-concept modules
Q3/2014 – Start of component batch testing using CHARM (PS East Area)
Q2/2015 – Series production
[Photos: Prototype, FPGA Type Tester, ADC Type Tester]

FGClite Introduction (4/4): Installation
[Chart: FGClite project planning, number of FGCs over time. FGC2 (≈1700 units) shows an increasing failure rate and becomes unacceptable; FGClite burn-in & installation (≈650 units, then ≈1050 units) passes through infant-mortality failures before FGClite becomes operational. First Beam & Physics marked.]
2015: FGC2 will be used for the first months after LS1, as its failure rate will still be acceptable.
2015/16: FGC2 will start failing due to increasing radiation levels, so FGClite will be installed.
2016/17: Once infant-mortality issues are addressed, the operational FGClite will exhibit a much lower failure rate than FGC2.

FGClite Challenges: Reliable communication infrastructure
FGClite infrastructure (TE-EPC):
1. Robust FGClite hardware
2. Robust gateway software and hardware
WorldFIP (BE-CO):
1. FGClite/gateway communication modules
2. Robust repeaters, cabling, etc.
Fieldbus functional tests (BE-CO): high speed, high temperature, long cables, 30 days; not sufficient statistics
Lost-packet probability: <4.4×10⁻⁹ per packet
FGClite system: <1.6×10⁴ lost packets/year
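The lost-packet budget above can be sanity-checked with a back-of-the-envelope calculation. The per-packet loss bound (<4.4×10⁻⁹) and the yearly budget (<1.6×10⁴) come from the slide; the WorldFIP cycle rate and device count below are illustrative assumptions, not project figures:

```python
# Sanity check of the lost-packet budget. PACKET_LOSS_PROB and the
# 1.6e4/year budget are from the slide; CYCLE_RATE_HZ and N_DEVICES
# are assumed values for illustration only.
PACKET_LOSS_PROB = 4.4e-9          # upper bound per packet (slide)
CYCLE_RATE_HZ = 50                 # assumed FIP packets per second per device
N_DEVICES = 1700                   # assumed converter count (≈1700 units, earlier slide)
SECONDS_PER_YEAR = 365 * 24 * 3600

packets_per_year = N_DEVICES * CYCLE_RATE_HZ * SECONDS_PER_YEAR
expected_lost = packets_per_year * PACKET_LOSS_PROB
print(f"{packets_per_year:.2e} packets/year, ~{expected_lost:.0f} expected losses/year")
```

Under these assumptions the expectation lands below the 1.6×10⁴/year figure, which is consistent with the budget being an upper bound over the whole system.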

Power Control & Machine Protection Links
FGClite system: links with machine protection
Power cycle possibilities:
1. FD_REQ: Field Drive Request – no Fieldbus carrier
2. NF_REQ: NanoFIP Request – requested by the gateway via the Fieldbus
3. FL_REQ: FGClite Request – requested by the FGClite itself
4. PB_REQ: Push Button Request – requested by an operator from the front panel
An FGClite power cycle leads to a power converter failure.
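The four request sources above can be sketched as a small model; the names mirror the slide, while the grouping into remote vs. local requests anticipates the power-cycle procedures described later and is otherwise illustrative:

```python
# Illustrative model of the four power-cycle request sources.
# Names follow the slide; the REMOTE grouping reflects that FD_REQ,
# NF_REQ and FL_REQ follow the remote power-cycle procedure, while
# PB_REQ is a local operator action.
from enum import Enum

class PowerCycleRequest(Enum):
    FD_REQ = "Field Drive Request (no Fieldbus carrier)"
    NF_REQ = "NanoFIP Request (gateway, via Fieldbus)"
    FL_REQ = "FGClite Request (from the FGClite itself)"
    PB_REQ = "Push Button Request (operator, front panel)"

REMOTE = {PowerCycleRequest.FD_REQ,
          PowerCycleRequest.NF_REQ,
          PowerCycleRequest.FL_REQ}

def is_remote(req: PowerCycleRequest) -> bool:
    """True if the request follows the remote power-cycle procedure."""
    return req in REMOTE
```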

System Failure Modes (1/6)
Gateway/SIS communication fails but Fieldbus communication is OK
The FGClite will not power cycle
The Software Interlock may decide to initiate protective action

System Failure Modes (2/6)
Gateway fails and Fieldbus communication fails (failed communication = more than 1 lost FIP packet)
All 30 FGClites will power cycle (triggered by the lack of communication with the gateway)
The Software Interlock may decide to initiate protective action
After the power cycle the FGClites will wait for valid FIP communication

System Failure Modes (3/6)
FGClite transmission failure
A single FGClite will power cycle, triggered by the gateway
The Software Interlock may decide to initiate protective action
After the power cycle the FGClite will wait for valid FIP communication

System Failure Modes (4/6)
FGClite reception failure (failed communication = more than 1 lost FIP packet)
A single FGClite will power cycle, triggered by the FGClite itself
The Software Interlock may decide to initiate protective action
After the power cycle the FGClite will wait for valid FIP communication

System Failure Modes (5/6)
FGClite internal failure
A single FGClite will power cycle, triggered by the gateway
The Software Interlock may decide to initiate protective action
After the power cycle the FGClite will wait for valid FIP communication

System Failure Modes (6/6)
Push Button pressed
A single FGClite will power cycle, triggered by an operator from the front panel
After the power cycle the FGClite will wait for valid FIP communication
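The six failure modes can be summarised as a lookup table of which FGClites power cycle and what triggers the cycle. The facts mirror the six slides above; the data structure itself is only an illustrative summary:

```python
# Summary of the six failure modes: (scope of power cycle, trigger).
# Contents restate the slides; the table form is illustrative.
FAILURE_MODES = {
    "gateway/SIS link fails, Fieldbus OK": ("none", None),
    "gateway and Fieldbus fail":           ("all 30 FGClites", "loss of gateway communication"),
    "FGClite transmission failure":        ("single FGClite", "gateway"),
    "FGClite reception failure":           ("single FGClite", "the FGClite itself"),
    "FGClite internal failure":            ("single FGClite", "gateway"),
    "push button pressed":                 ("single FGClite", "operator"),
}

def cycles(mode: str) -> bool:
    """True if this failure mode results in a power cycle."""
    scope, _trigger = FAILURE_MODES[mode]
    return scope != "none"
```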

Power Cycle Procedure (1/2)
Remote Power Cycle (FD_REQ, NF_REQ, FL_REQ)
Procedure:
1. Communicate PWR_FLR
2. Maintain Vref (time for the SIS to react)
3. Start ramping down (spread in time to avoid fast orbit changes)
4. Ramp down (to avoid a QPS trigger) & switch off
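The four steps above can be sketched as a sequence. The timings and the per-converter spread formula are invented placeholders to illustrate step 3 (staggering ramp-downs so that converters on one sector do not all move at once); they are not project values:

```python
# Sketch of the remote power-cycle procedure. sis_react_s and spread_s
# are placeholder parameters, not actual FGClite timings.
def remote_power_cycle(converter_id: int, n_converters: int,
                       sis_react_s: float = 1.0, spread_s: float = 2.0):
    log = []
    log.append("PWR_FLR communicated")                 # 1. signal the failure
    log.append(f"Vref held for {sis_react_s}s")        # 2. give the SIS time to react
    # 3. spread ramp-down start times across converters
    delay = spread_s * converter_id / max(n_converters - 1, 1)
    log.append(f"ramp-down starts after {delay:.2f}s")
    log.append("ramped down, switched off")            # 4. ramp avoids QPS trigger
    return log
```

For example, with 30 converters on a sector, the first starts ramping immediately and the last after the full spread interval.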

Power Cycle Procedure (2/2)
Push Button Power Cycle (PB_REQ)
Procedure:
1. Communicate PWR_FLR
2. Switch off

Conclusions
FGClite relies on highly available Fieldbus communication
The FGClite failure modes were identified:
1. Gateway failure
2. Fieldbus communication failure
3. FGClite transmission/reception failure
4. FGClite internal failure
A power cycle strategy was defined:
1. Information to the Software Interlock/Power Interlock
2. Ramping down Vref at different times within the same Fieldbus sector

Radiation Tests / Electrical MTBF
Dedicated R2E design/test methodology
[Images: Type Testing Status, Batch Testing Status]
Electrical MTBF estimation:
– When the final FGClite prototype is finished, the MTBF will be computed
– The computations will be based on the Military Handbook MIL-HDBK-217F
– The electrical MTBF will be compared with that of the FGC2 system (~1 Mhours)
Fieldbus communication tests & burn-in:
– The first Fieldbus sector will be populated in Q; this will allow Fieldbus communication tests and measurement of its error rate
– First burn-in tests will be performed in Q4 2014
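The MIL-HDBK-217F prediction mentioned above is, in its simplest parts-count form, a sum of per-part failure rates whose reciprocal gives the system MTBF. The part list and rates below are made-up placeholders to show the arithmetic, not FGClite data:

```python
# Minimal parts-count MTBF estimate in the spirit of MIL-HDBK-217F:
# system failure rate = sum of part failure rates; MTBF = 1 / rate.
# Part names and FIT values are placeholders, not FGClite figures.
part_failure_rates_fit = {          # FIT = failures per 1e9 device-hours
    "FPGA": 120.0,
    "ADC": 80.0,
    "DC/DC converter": 150.0,
    "passives (aggregate)": 50.0,
}

total_fit = sum(part_failure_rates_fit.values())
mtbf_hours = 1e9 / total_fit
print(f"total rate: {total_fit} FIT, MTBF ~ {mtbf_hours:.3e} hours")
```

A result of this kind would then be compared against the quoted ~1 Mhours MTBF of the FGC2 system.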