Slide 1: Synchronization and Sequencing for High-Level Applications
Julian Lewis, AB/CO/HT

Slide 2: Synchronization: How It Works
[Diagram: PSB rings 1.1, 1.2, 2.1 and 2.2 fill PS batches 1 and 2, which are transferred into SPS buckets/rings and on to the LHC. A client (JAPC, C/C++) receives acquisition data plus an acquisition time stamp from the DSC via MW, and the synchronization event plus telegram, time stamps and flags (StBm, StCy, EnBm, EnCy) from DMCRPLS, either via UDP or via GMT and a CTRI. The supercycle composition travels on all timing cables. Events whose arrival time falls inside the window from t = 0 to t = K around t_mean are matched OK; an event outside the window is missed.]
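A minimal sketch of the matching rule the diagram implies, with invented names (this is not the actual JAPC API): an acquisition is attached to a cycle only if its time stamp falls inside the acceptance window around the cycle's stamp; otherwise it is treated as missed.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical illustration: times in microseconds since the epoch. */
    typedef struct {
        uint64_t cycle_stamp;   /* beam/cycle time stamp from the telegram */
        uint64_t window_us;     /* acceptance window K after the stamp     */
    } CycleWindow;

    /* An acquisition is correlated with a cycle only if its time stamp
     * lies inside [cycle_stamp, cycle_stamp + K]; outside that window
     * it must be flagged as missed, not silently attached. */
    static bool matches_cycle(uint64_t acq_stamp, const CycleWindow *w)
    {
        return acq_stamp >= w->cycle_stamp &&
               acq_stamp <= w->cycle_stamp + w->window_us;
    }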

Slide 3: Application Events
An example event dump (some values lost in the transcript):
00:Fields : 0x807fffff
01:EvtId  : …
02:Catags : 0x…e: StrCyc TgmRdy BpRdy EndCyc StrtBm EndBm
03:Machine: PSB
04:BPTime : Tue-22/Mar/… …:24:… (… S) (917 Ms)
06:SeqNo  : …
07:ChsId  : Fri-18/Mar/… …:52:03
08:Level  : 1
09:BmState: Spare
10:BmId   : …
11:BmIns  : 5
12:BmTime : Tue-22/Mar/… …:24:… (… S) (917 Ms)
14:CycId  : 2: ISOGPS
15:CycInst: 2
16:CycTime: Tue-22/Mar/… …:24:… (… S) (917 Ms)
18:BPInst : 5
19:AqnTime: Tue-22/Mar/… …:24:… (… S) (917 Ms)
21:EvtTime: Tue-22/Mar/… …:24:… (… S) (917 Ms)
31:Telegm : ff 586c bc fb …
[Slide callouts label the acquisition time stamp, the beam time stamp, the telegram and the flags.]
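As a reading aid, here is a hypothetical C rendering of the dumped fields. The names follow the dump, but the types and layout are guesses for illustration, not the real timing-library structure.

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical mirror of the dumped event fields. */
    typedef struct {
        uint32_t fields;           /* 00: valid-field bit mask, e.g. 0x807fffff */
        uint32_t evt_id;           /* 01: event identifier                      */
        uint32_t catags;           /* 02: category flags: StrCyc, TgmRdy,
                                          BpRdy, EndCyc, StrtBm, EndBm          */
        char     machine[8];       /* 03: accelerator name, e.g. "PSB"          */
        struct timespec bp_time;   /* 04: basic-period time stamp               */
        uint32_t seq_no;           /* 06: sequence number                       */
        struct timespec chs_id;    /* 07: composition identifier (a date)       */
        uint32_t level;            /* 08: level                                 */
        char     bm_state[8];      /* 09: beam state, e.g. "Spare"              */
        uint32_t bm_id;            /* 10: beam identifier                       */
        uint32_t bm_ins;           /* 11: beam instance                         */
        struct timespec bm_time;   /* 12: beam time stamp                       */
        char     cyc_id[16];       /* 14: cycle identifier, e.g. "2: ISOGPS"    */
        uint32_t cyc_inst;         /* 15: cycle instance                        */
        struct timespec cyc_time;  /* 16: cycle time stamp                      */
        uint32_t bp_inst;          /* 18: basic-period instance                 */
        struct timespec aqn_time;  /* 19: acquisition time stamp                */
        struct timespec evt_time;  /* 21: event time stamp                      */
        uint8_t  telegram[64];     /* 31: raw telegram bytes                    */
    } AppEvent;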

Slide 4: Latency for a CTRI on pcgw, Kernel 2.6
The measurement loop runs against the CTRI driver: wait for the 1PPS interrupt, read the time, compute the latency, repeat. Averaged over 500 calls on an unloaded machine, the driver-call latency is about 0.025 ms; averaged over 265 calls on a very loaded machine, it rises to about 2.5 ms.
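A sketch of such a latency measurement, assuming a hypothetical /dev/ctri device whose read() blocks until the next 1PPS interrupt and returns the interrupt's time stamp; the real ctr driver interface differs.

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical device node; the real CTR driver exposes its own API. */
    #define CTRI_DEV "/dev/ctri"

    int main(void)
    {
        int fd = open(CTRI_DEV, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        double sum = 0.0;
        const int calls = 500;

        for (int i = 0; i < calls; i++) {
            struct timespec pps;   /* 1PPS interrupt time stamp         */
            struct timespec now;   /* time at which the call returns    */

            /* Block until the next 1PPS interrupt fires. */
            if (read(fd, &pps, sizeof(pps)) != (ssize_t)sizeof(pps))
                break;
            clock_gettime(CLOCK_REALTIME, &now);

            /* Latency = wall-clock time minus interrupt time stamp. */
            sum += (now.tv_sec - pps.tv_sec)
                 + (now.tv_nsec - pps.tv_nsec) / 1e9;
        }

        printf("Average latency over %d calls: %g s\n", calls, sum / calls);
        close(fd);
        return 0;
    }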

Slide 5: Producing Events
[Diagram: the CBCM programs (BCD, beams, cycles, etc.) drive an event pusher. Events travel over the GMT timing cables (PSB/LEI/CPS/ADE/SPS/LHC) to mission-critical servers and workstations equipped with CTG/CTRI cards, and over DTM/UDP to non-critical systems in the CCC, offices and labs, and for CNGS Gran Sasso. The CCC also uses reflective memory.]
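On the non-critical UDP path, receiving events reduces to listening on a socket. A sketch, with the port number and payload handling invented for illustration:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical port; the real DTM/UDP event stream defines its own. */
    #define EVENT_PORT 1234

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); return 1; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(EVENT_PORT);
        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        for (;;) {
            unsigned char buf[512];            /* one event datagram */
            ssize_t n = recv(s, buf, sizeof(buf), 0);
            if (n < 0) break;
            /* Decode the event header, telegram and time stamps here.
             * Note this path gives no real-time guarantee, unlike a CTRI. */
            printf("received %zd-byte event\n", n);
        }
        close(s);
        return 0;
    }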

Slide 6: RF Synchronization

Slide 7: Go/No-Go and Retries

Slide 8: LHC Filling

Slide 9: CBCM / Brain Critical Interactions
[Diagram: the CBCM and the LHC "brain" exchange critical timing control messages: Post-Mortem, Dump, Bucket, Ring, Intensity, Batches, Start-Ramp, Ready, Pilot, One-Shot and Commit events, alongside data and time stamps from the MW.]
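Purely as an illustration of the message vocabulary named on the slide (the real CBCM protocol defines its own encoding), the critical interactions could be enumerated like this:

    /* Illustrative only: the critical messages exchanged between the
     * CBCM and the LHC "brain", as named on the slide. */
    typedef enum {
        MSG_POST_MORTEM,  /* trigger post-mortem data collection */
        MSG_DUMP,         /* beam dump                           */
        MSG_BUCKET,       /* target RF bucket for injection      */
        MSG_RING,         /* target LHC ring                     */
        MSG_INTENSITY,    /* requested beam intensity            */
        MSG_BATCHES,      /* number of batches to inject         */
        MSG_START_RAMP,   /* start the energy ramp               */
        MSG_READY,        /* ready handshake                     */
        MSG_PILOT,        /* pilot-beam injection                */
        MSG_ONE_SHOT,     /* single-shot request                 */
        MSG_COMMIT        /* commit the transaction              */
    } BrainMessage;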

Slide 10: Field and Timing

Slide 11: Added Value (1)
- All telegrams for all accelerators are available.
- Acquisition-data-to-cycle correlation.
- Cycle-to-beam and BCD correlation.
- Detection of missing or late MW data.
- Better handling of errors during an LHC fill, allowing quicker response and fewer lost supercycles.
- Makes on-change subscriptions possible (see the sketch after this list).
- No need to subscribe to telegrams, so the MW is less stressed and gets on with its essential job.
- Less loading of the controls network.
- Precise: 250 us to 2.5 ms (scheduler), using a CTRI.
- Reliable: a CTRI is unaffected by network loading.
- Works without MW subscription (pull).
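A sketch of what "on-change" means here, using a hypothetical helper rather than the actual subscription API: the client callback fires only when the telegram content actually differs from the last one delivered, so unchanged cycles cost nothing.

    #include <stdbool.h>
    #include <string.h>

    #define TGM_LEN 64

    typedef void (*TelegramCallback)(const unsigned char *tgm, int len);

    static unsigned char last_tgm[TGM_LEN];
    static bool have_last = false;

    /* Deliver a telegram to the client only if it changed. */
    void deliver_on_change(const unsigned char *tgm, TelegramCallback cb)
    {
        if (have_last && memcmp(tgm, last_tgm, TGM_LEN) == 0)
            return;                    /* unchanged: no notification */
        memcpy(last_tgm, tgm, TGM_LEN);
        have_last = true;
        cb(tgm, TGM_LEN);              /* changed: notify the client */
    }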

Slide 12: Added Value (2)
- Backwards compatible with the PS complex.
- Real-time events can be added as needed, e.g. post-mortem, BIC, commit transactions.
- Works everywhere (offices and the technical network).
- Client code is unaware of the event source.
- Scalable, i.e. unaffected by the number of clients.
- Allows modification of the telegram description on the fly.

Slide 13: Effect of Installing a CTRI on a Server or Workstation
- Client software: zero effect.
- Workstation configuration: zero effect.
- Hardware: the CTRI cards are reliable; if they don't work we are in real trouble!
Installed like this:
- Linux DSCs: insmod /ps/dsc/mcr/L86/`uname -r`/ctr/CtrModule.ko
- Linux servers and workstations: insmod /ps/mcr/`uname -r`/ctr/CtrModule.ko

Slide 14: Cost and Maintenance
- 150 m of cable (< 1000 Sf)
- 4 timing fan-outs (< 4 x 100 Sf)
- GMT Cannon cables as needed (25 Sf each)
- CTRI cards as needed (~800 Sf each)
- Maintenance of the Linux driver: no extra cost.
- Diagnostics: available; ctrtest is a lot easier to use than the DTM diagnostics. Saves costs.

Slide 15: Conclusion: Using CTRIs
- Installing CTRI cards in critical systems is cheap, easy to do, and greatly increases both run-time reliability and timing precision.
- No effect on clients or platform configuration.
- Increases the chances of detecting errors early, and hence reduces the number of lost supercycles during an LHC fill. Less annoying to PS Ops.
- Increases overall performance by reducing network and MW loads.
- Completely deterministic.
- A good and cheap insurance policy: connect stations as needed at 1000 Sf per connection; cables and connectors cost less than 2000 Sf!
- Easier to maintain. I see no downside.