Real-time orbit control at the LHC
Summary of the mini-workshop held Oct 6th 2003
J. Wenninger, AB-OP-SPS
With emphasis on control aspects.
Details of all presentations are available on:

Real-time orbit control: what's that?
The aim of the real-time orbit control system is to stabilize the orbit of the LHC beams during ALL operational phases within the required tolerances.
It is a real-time system in the sense that it must be deterministic; this is very important during critical phases.
The LHC system is 'unique' because it is distributed over a large geographical area and because of the large number of components.
Very schematically, we have 5 players: Beams, BPM system, Network, 'Controller', PC system.

People
The parties that are involved:
- BDI: beam position system
- PC: orbit corrector control
- CO: communication, servers, 'controls infrastructure'
- OP: main 'user'
Core team for the prototyping work at the SPS:
- BDI: L. Jensen, R. Jones (BPM HW & readout)
- CO: J. Andersson, M. Jonker (server, PC control)
- OP: R. Steinhagen, J. Wenninger (algorithms, FB loop, measurements)
But also: M. Lamont, Q. King, T. Wijnands, K. Kostro and others.

Orbit control at the LHC: requirements
- Excellent overall control over the orbit during all OP phases: RMS change < 0.5 mm, for potential perturbations of up to 20 mm.
- Tight constraints around the collimators (IR3 & IR7), absorbers, etc.: RMS change < ~50-70 µm for nominal performance.
- 'New' and very demanding requirement from the TOTEM experiment: stability of ~5 µm over 12 hours around IR5. Main problem: it is not sure that the BPMs are that stable in the first place.
EXPECTED sources of orbit perturbations:
- Ground motion, dynamic effects (injection & ramp), squeeze.
- Mostly 'drift-like' or 'ramp-like' effects; the frequency spectrum is < 0.1 Hz.

Limitations from power converters & magnets
There are orbit corrector magnets in each ring and plane (mostly cold).
SC orbit correctors:
- Circuit time constants: τ ≈ 10 to 200 s (arc correctors ~200 s). For comparison, in the SPS: τ ≈ 0.5 s.
- EVEN for SMALL signals, the PCs are limited to frequencies ≤ 1 Hz. At 7 TeV 'small' means really small: ~20 µm oscillation at 1 Hz.
Warm orbit correctors: only a few per ring.
- Circuit time constants τ ~ 1 s; the PCs can run them at > 10 Hz.
- But there are too few of them to do anything useful with!
⇒ The PCs will limit correction by the FB to frequencies ≤ 1 Hz!

Real-time…
Real-time control implies deterministic correction/control ⇒ stable delays (at least within some tolerances).
Digital loop performance depends on: the sampling frequency f_s and the delay.
As a rule of thumb, given the highest frequency f_p^max at which the FB should perform well: f_s > ~30 × f_p^max, i.e.
- f_p^max = 0.1 Hz ⇒ f_s ~ 3 Hz (expected 'noise')
- f_p^max = 1 Hz ⇒ f_s ~ 30 Hz (to reach the PC limit)
Delay < 0.05 × (1/f_p^max), i.e. < ~50 ms for f_p^max = 1 Hz.
I recommend using the PC limit of 1 Hz as the design target and not the expected noise: it gives margin + best use of the HW!
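A minimal numerical sketch of this rule of thumb (the factor of ~30 between f_p^max and f_s is inferred from the two examples above, not an exact design formula):

```python
# Rule-of-thumb loop sizing from the slide: f_s > ~30 x f_p_max and
# delay < 0.05 / f_p_max. Illustrative only.

def loop_requirements(f_p_max_hz: float) -> tuple[float, float]:
    """Return (min sampling frequency [Hz], max delay [s])."""
    f_s_min = 30.0 * f_p_max_hz        # f_s > ~30 x f_p_max
    delay_max = 0.05 / f_p_max_hz      # delay < 0.05 / f_p_max
    return f_s_min, delay_max

for f_p in (0.1, 1.0):                 # expected noise / PC limit
    f_s, d = loop_requirements(f_p)
    print(f"f_p_max = {f_p} Hz -> f_s ~ {f_s:.0f} Hz, delay < {d * 1000:.0f} ms")
```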

RT feedback building blocks
[Loop diagram: Reference → (+/-) → FB controller → correction algorithm → PC + magnet → Beam(s) (+ 'noise') → BPM → back over the Network to the controller.]
This RT loop spans the entire LHC machine. For good performance:
- the reliability of each component must be adequate.
- the delays must be 'short', ~O(100 ms), and stable.
Each component has its own characteristic transfer function that must be known to design the controller.
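A toy simulation of this block diagram, to show why the delay matters: a scalar 'orbit' with a ramp-like drift, corrected through a delayed proportional action that accumulates in the corrector (an integrating loop). All numbers are purely illustrative, not LHC parameters:

```python
# Toy closed-loop model: drift + delayed, integrated correction.
gain = 0.3                       # loop gain per sample
delay_samples = 1                # network + PC delay, ~1 sample at 10 Hz
orbit, correction = 0.0, 0.0
pending = [0.0] * delay_samples  # corrections still 'in flight'
for k in range(100):
    drift = 10e-6 * k                    # 'noise': 10 um drift per sample
    correction += pending.pop(0)         # delayed correction reaches the magnets
    orbit = drift + correction           # beam position seen by the BPMs
    pending.append(-gain * orbit)        # controller acts on the BPM reading
print(f"residual orbit: {orbit * 1e6:.1f} um")  # settles near drift-rate/gain
```

With a longer delay or a higher gain the same loop starts to oscillate, which is the qualitative reason the transfer functions must be known before fixing the controller.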

Global architecture
Central FB:
+ entire information available.
+ all options possible.
+ can be easily configured and adapted.
- network more critical: delays and large number of connections.
Local / per-IR FB:
+ reduced number of network connections, less sensitive to the network.
+ numerical processing faster.
- less flexibility.
- not ideal for global corrections.
- coupling between the loops is an issue.
- problems at the boundaries.
Central is preferred!! For the moment…

Beam Position Monitors
Hardware:
- ~500 position readings per ring and plane, ~1000 BPMs for the 2 rings.
- Front-end crates (standard AB-CO VME) are installed in the SR buildings: 18 BPMs (hor + ver) ⇒ 36 positions per VME crate; 68 crates in total, 8-10 crates per IR.
Data streams:
- Nominal sampling frequency is 10 Hz, but I hope to run at 25 Hz: > 100 times larger than the frequency of the fastest EXPECTED closed-orbit perturbation.
- Average data rates: 18 BPMs × 20 bytes ~ 400 bytes/sample/crate; per IR, 140 BPMs × 20 bytes ~ 3 kbytes/sample.
- At 25 Hz, from each IR: average rate ~0.6 Mbit/s, instantaneous rate ~25 Mbit/s (1 ms burst every 40 ms).
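A quick check of the data-rate arithmetic above (numbers taken from this slide; 20 bytes per position record):

```python
# BPM data-rate arithmetic from the slide.
BPMS_PER_CRATE, BPMS_PER_IR = 18, 140
BYTES_PER_BPM = 20
F_S = 25.0                                        # hoped-for sampling rate [Hz]

crate_sample = BPMS_PER_CRATE * BYTES_PER_BPM     # ~400 bytes/sample/crate
ir_sample = BPMS_PER_IR * BYTES_PER_BPM           # ~3 kbytes/sample/IR
avg_mbit = ir_sample * F_S * 8 / 1e6              # ~0.6 Mbit/s average per IR
burst_mbit = ir_sample * 8 / 1e-3 / 1e6           # ~25 Mbit/s during a 1 ms burst
print(crate_sample, ir_sample, round(avg_mbit, 2), round(burst_mbit))
```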

More on BPMs
An alternative acquisition mode is the multi-turn mode:
- For each BPM one can acquire up to 100'000 turns of data per plane.
- The acquisition itself does not interfere with the RT closed orbit, but the readout + sending to the user does!!
Data volumes:
- 100'000 × 2 (planes) × 18 (BPMs) × 4 bytes ~ 16 Mbytes/crate.
- This data must be extracted without interfering with the RT closed orbit.
- There are even proposals to 'feed back' at 1 Hz on machine coupling with such data (only 1000 turns!): per IR, 10 crates × 8 bits × (16/100) Mbytes ~ 10 Mbit/s.
We must carefully design the readout to prevent 'destructive' interference of other 'BPM services' with the RT closed orbit!

LHC Network
What I retained from a (post-WS) discussion with J.M. Jouanigot:
- It has lots of capacity: > 1 Gbit/s for each IR.
- It has very nice & very fast switches (µs switching time).
- It has redundancy in the IR-CR connections.
- Is it deterministic? It is not, but delays should be small (< 1 ms) and stable if the traffic is not too high. All users are equal, but 2 network profiles are available in case of problems.
- With the SPS network renovation, we will be able to test a network that looks much more LHC-like in 2004.
- I suspect that as long as we do not know the details of the data rates, it is difficult to make precise predictions for the LHC.

'Controller'
Main tasks of the controller / central server(s):
- Swallow the incoming packets (~70 per sampling interval).
- Reconstruct the orbit & compare it to the reference orbit.
- Calculate the new deflections (matrix multiplication … or more).
- Apply the control algorithm to the new and past deflections.
- Verify the current & voltage limits of the PCs.
- Send the corrections out…
Other tasks:
- Monitor the HW status (BPMs & correctors), feed back on the response matrices.
- Get beam energy & machine state info (→ algorithm, optics, reference orbit…).
- Logging & post-mortem.
- Interaction with the operation crews (ON, OFF, parameters…).
⇒ The controller will (most likely) consist of a number of threads running on a dedicated 'machine' that need some form of RT scheduling and synchronization!
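An illustrative skeleton of one controller cycle, following the task list above. Everything here (names, shapes, the simple proportional update and clip-style limit check) is a hypothetical sketch, not the actual server design:

```python
# Hypothetical sketch of one RT cycle of the central orbit server.
import numpy as np

def controller_cycle(crate_packets: dict[int, np.ndarray],
                     reference: np.ndarray,
                     R_pinv: np.ndarray,
                     deflections: np.ndarray,
                     pc_limit: float,
                     gain: float = 0.3) -> np.ndarray:
    # 1. 'Swallow' the incoming packets and reconstruct the orbit
    orbit = np.concatenate([crate_packets[c] for c in sorted(crate_packets)])
    # 2. Compare to the reference orbit
    error = orbit - reference
    # 3. Calculate new deflections (matrix multiplication ... or more)
    delta = -gain * (R_pinv @ error)
    # 4. Apply the control algorithm to new and past deflections (here: integrator)
    new_deflections = deflections + delta
    # 5. Verify current & voltage limits on the PCs (here: a simple clip)
    new_deflections = np.clip(new_deflections, -pc_limit, pc_limit)
    # 6. The corrections would now be sent out to the PC gateways
    return new_deflections
```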

PC control
Architecture:
- Each PC is controlled by one Function Generator Controller (FGC).
- Up to 30 FGCs (PCs) per WorldFip bus segment.
- 1 gateway controls a given WorldFip segment.
- The orbit correctors are accessed over ~40 gateways.
[Diagram: Gateway → WorldFip → FGC + PC, FGC + PC, …]
Timing & access:
- The WorldFip bus is expected to run at 50 Hz (20 ms cycle) ⇒ the FB sampling frequency must be f_s = 50 Hz / n, n = 1, 2, 3…
- The delay (WorldFip + PC setting) is ~30-40 ms.
- The present idea is to send all settings to some 'central' PO gateways that will dispatch the data to the lower-level gateways & WorldFip.
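A tiny check of the timing constraint above, listing the sampling frequencies compatible with the 50 Hz WorldFip cycle (the 10 and 25 Hz BPM rates from the earlier slide both fit):

```python
# f_s must be an integer sub-multiple of the 50 Hz WorldFip cycle.
bpm_rates = {10.0, 25.0}                      # BPM rates from the earlier slide
for n in range(1, 11):
    f_s = 50.0 / n
    tag = " <- matches a BPM rate" if f_s in bpm_rates else ""
    print(f"n = {n:2d}: f_s = {f_s:5.2f} Hz{tag}")
```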

Schematically…
[Diagram: the present architecture, as seen by the parties involved. The FB 'servers' and the BPM front-ends talk to PC gateways; the PO gateways hide the HW details from the clients and dispatch to the WorldFip + PC front-ends. Open question: remove this (gateway) layer?]

Delays
Estimated delays, in ms:
- BPMs: 5-10
- Network, inbound: 1
- Packet reception: 30
- Correction: 10-40
- Packets out: 10
- Network, outbound: 1
- PC control: 30-40
- Total: ~90-130 ms
Just acceptable if you consider the PC limit of 1 Hz. For a 25 Hz sampling rate, this is already > 1 period!

Synchronization
- All BPM crates are synchronized via the BST to 1 turn.
- All PC WF segments are synchronized (GPS).
- ⇒ Synchronize the RT acquisition and the WF segments to maintain stable delays.
[Diagram: a common 'tick' drives BPM → Controller → PC/WF; the overall delay is constant!]

Orbit drifts in the "FB perspective"
Consider:
- FB sampling rate: 10 Hz
- BPM resolution: 5 µm (~nominal LHC filling)
- Tolerance: 200 µm (injection/ramp), 70 µm (squeezed, physics)
Compare the orbit drift rates in some 'typical' and most critical situations:

Phase                      | Total drift / total duration | Drift / FB interval | No. samples to reach tolerance
Start ramp ('snapback')    | 2 mm / 20 s                  | 10 µm               | 20
Squeeze                    | 20 mm / 200 s                | 10 µm               | 7
Physics (LEP, pessimistic) | 4 mm / hour                  | 1 µm                | 70*

Note: those are approximate numbers, but they give an idea of the 'criticality' of the communications.
(*): not for TOTEM…
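The ramp and squeeze rows follow directly from the stated rates; a short sketch of that arithmetic (10 Hz sampling, tolerances from this slide):

```python
# Drift per FB interval and samples-to-tolerance for the two critical rows.
F_S = 10.0                                    # FB sampling rate [Hz]
cases = [                                     # (name, drift [m], duration [s], tolerance [m])
    ("start ramp ('snapback')", 2e-3, 20.0, 200e-6),
    ("squeeze",                 20e-3, 200.0, 70e-6),
]
for name, drift, duration, tol in cases:
    per_sample = drift / duration / F_S       # drift accumulated per FB interval
    n_samples = tol / per_sample              # samples before the tolerance is used up
    print(f"{name}: {per_sample * 1e6:.0f} um/sample, ~{n_samples:.0f} samples to tolerance")
```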

What happens if we lose a sample?
- During the ramp and squeeze phases: not nice, but not yet fatal; we have a small 'margin'.
- In collisions (not for TOTEM!!) and during injection: most likely nobody will notice (except the FB itself), provided we hold the latest settings. If conditions are similar to LEP, we can afford to lose a few samples at 7 TeV.
- We can also rely on reproducibility to feed-forward average corrections from one fill to the next (only ramp & squeeze): this may reduce the workload on the FB by ~80% (but not guaranteed!) and we become less sensitive to packet losses!
We must keep in mind that:
- Packet losses remain a hassle and require more conservative gain settings!
- We cannot tolerate long drop-outs (> 500 ms) in critical phases.
- Losing a front-end is also an issue; it is more complex to handle than a lost packet!
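A small sketch of the 'hold the latest settings' policy discussed above; the names and the 500 ms threshold are taken from this slide, everything else is illustrative:

```python
# Hold previous corrector settings on a lost sample; alarm on long drop-outs.
MAX_DROPOUT_S = 0.5    # drop-outs > 500 ms are not tolerable in critical phases

def handle_sample(sample, last_settings, dropout_s, dt, update):
    """update: the normal FB computation, called only when a sample arrived."""
    if sample is None:                       # lost packet / missed sample
        dropout_s += dt
        if dropout_s > MAX_DROPOUT_S:
            raise RuntimeError("FB drop-out too long: freeze loop & alarm")
        return last_settings, dropout_s      # hold the latest settings
    return update(sample, last_settings), 0.0
```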

SPS prototyping in 2003
[Setup diagram: a BPM VME crate and the PCs (ROCS) in BA5, a central control server in the PCR, connected over the SPS network; node names bpl50sa, bcmw1m2, sba5.]
- BPMs: VME crate with LHC ACQ cards, ~identical to the LHC, but only 4 BPMs (6 in 2004).
- Communication: BPMs → server: UDP (and later CMW / TCP); server → PCs: UDP.
- Central control server for correction, gain control, data logging…
- Maximum sampling rate pushed to 100 Hz!
- The present ('old') SPS network was a problem: 'frequent' packet losses, sometimes important delays (> 500 ms).
An extremely valuable exercise; a lot of time was spent testing the system on the LHC beams.
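A minimal sketch of the kind of UDP collection used in the prototype (BPM front-end → server). The port and packet layout are invented for illustration; the real prototype's formats are not documented here:

```python
# Hypothetical UDP receive loop in the style of the SPS prototype server.
import socket
import struct

SERVER_PORT = 5005                     # hypothetical port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", SERVER_PORT))
sock.settimeout(0.1)                   # one sampling interval at 10 Hz

try:
    data, addr = sock.recvfrom(2048)   # one BPM packet (a few hundred bytes)
    n = len(data) // 4
    positions = struct.unpack(f"<{n}i", data[: n * 4])  # e.g. packed 32-bit positions
    print(addr, positions[:4])
except socket.timeout:
    print("sample lost: hold previous settings")  # cf. the packet-loss slide
```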

And it worked … very well!
[Plots: BPM reading (µm) vs. time (ms) for injection at 26 GeV, the ramp and 450 GeV, with feedback off and feedback on; in the zoom with feedback on, the residual is ~ the measurement noise!!]

Looking at LEP / I
Although the issues at LEP & the LHC are of a different nature, one can learn from LEP:
- No real-time orbit acquisition at LEP.
- Very strict orbit control was required to achieve the best luminosity.
- Orbit drifts were due to slow ground motion & low-beta quadrupole movements.
- During 'physics' (i.e. stable collisions) the orbit was stabilized by a feedback.
- FB parameters: sampling period ~15 to 120 seconds; non-deterministic, variable delay > 5 seconds; corrections were triggered above a threshold of ~50 to 200 µm rms.
- FB performance: it had no problem stabilizing the orbit to < 100 µm (if desired!). We regularly operated with 1 to 3 missing BPM FEs (power supplies…) with no incidence on performance; thank you, GLOBAL FB!
Since the same tunnel will host the LHC, there is a fair chance that the LHC beams will exhibit similar drifts in physics. But you are never 100% sure, and LEP was not critical for beam loss!

Looking at LEP / II
The ramp & squeeze were the actual machine efficiency killers:
- A large fraction of the beams that went into the ramp never made it into physics.
- The culprits: tune control (corrected from 1997 onwards by a real-time tune FB) and orbit control, which remained a problem until the last day!
- The problem: orbit changes in ramp & squeeze were large (many mm rms) and not sufficiently reproducible (long accesses…). Feed-forward of corrections was not sufficiently predictable. Ramp commissioning and cleaning was very difficult.
A 1-2 Hz orbit FB would have cured all our ramp & squeeze problems!
I think that it is in the ramp and squeeze phases that the orbit FB will be most useful and critical for the LHC!
Again, LEP survived because beam loss was no issue! For the LHC we have a chance to anticipate!

Conclusions
The importance of an orbit FB for the LHC was recognised at an early stage of the LHC design. As a consequence, both the BPM and PC systems were designed with RT capabilities.
The RT orbit system must be commissioned at an early stage of the machine startup in 2007, possibly before we first ramp the beams.
With the SPS we have a valuable (although limited) test ground for ideas and implementations, in particular for controls issues.
We still have some time, but there are a number of items to be tackled and some design choices to be made.

Hit list of issues … as I see / feel them at the moment
1 - Data collection from many clients ***(*): delays, reliability…
2 - Network AND front-end availability ***: packet loss rate, delays…
3 - RT operating systems & scheduling **: the SPS tests were based on fast response, not determinism!

Future efforts
Operating system & process scheduling:
- Interference between RT tasks & heavy data transfers in the BPM front-ends. Tests possible with the SPS setup.
- RT scheduling on the orbit server(s) side. Re-design the SPS prototype with an LHC-oriented & re-usable architecture.
Network / data collection:
- Check out the new IT network in the SPS in 2004.
- Tests in the IT lab / SM18 (traffic generators…).
- Question: do we need a dedicated network? The need is not 100% obvious to me, but if we get it, we take it! It must span ~ all of the LHC!
- Data collection tests: we must 'convince' ourselves that a central server can swallow the full packet stream at 10-25 Hz over > 24 hours without crashing & with adequate response time.

Future efforts / open questions II
Architecture and overall optimization:
- Optimization of the BPM layout in the FEs, the number of correctors… may reduce the number of clients by 20 or so.
- Accelerator simulations of faults…
Synchronization:
- Synchronization scheme for the BPMs, the FB server & the PCs.
SPS tests in 2004:
- Continue the studies in the SPS (network, loop, BPM front-end…).
- 'Interaction' of the FB & the prototype collimators in LSS5.

It's not all orbit…
Eventually we also have to deal with:
- Q (tune) feedback.
- Chromaticity feedback?
- Feed-forward of multipoles from the SM18 reference magnets.
Those systems are simpler because there is 1 'client' that generates data & a much smaller number of PCs, and more delicate because:
- Measurement rates depend on beam conditions (Q, Q').
- The number of measurements per fill may be limited (emittance preservation).
- Feed-forward is intrinsically more tricky.
So far little controls activity… Next event: a workshop on reference magnets / feed-forward in March 2004 by L. Bottura, J. Wenninger et al. (to be confirmed).