Presentation transcript:

Sundry

LHC Machine Development starts 19 June
– Original plan to have 90 m comm. next week was torpedoed by private discussions between spokesperson and CMS D.G…
– This means that proton data up to 18 June may be able to be used for the ICHEP conference

Turbine serving 2 CSC peripheral crates broke last night
– “Although there was no beam in LHC it was not possible to organize an access during the night to change the turbine, indicating once more that our manpower with permission to intervene on electrical installations is not sufficient.” (XEB report from Tech. Coord.)
– Impact: today’s fill 2725 ran with 18 chambers in CSC not operational (9 in VME–2/6 and 9 in VME–3/6) (~110/pb)
– Access currently underway to replace this turbine
– Misha and Armando (and I) are not happy

12 June 2012, G. Rakness (UCLA)

Pop quiz: can you see the problem?

Next topic… organization of a test of an increase of the L1A latency

Motivation: CMS L1 trigger hardware will require upgrades in order to handle the ever-increasing luminosity
– Updated hardware to be installed already in LS1
– (One place that has been pointed out to me as particularly needful is the DTTF)

Because any upgrade of trigger hardware has the potential to increase the L1A latency, we need to understand the implications for CMS data taking
– Do this now so that development can proceed

Developing the plan

26 March: +5 bx test? Run Meeting dedicated to discussion of the latency test identified the ECAL Preshower (ES) as a possible limitation
– Subsequent discussions indicated that the Tracker might be protecting the ES… so is the Tracker the limiting factor?

Trigger meetings: +10 bx test?
– Now Wesley knows about the ES–Tracker situation. He pushes the issue…

Discussions: Dave Barney realizes we can quantitatively determine the limitations of Tracker + ES using random L1As at high rate (w/o beam)
– Possible limitation: the period of the pseudo-random number generator in the GT would limit the scope of this test once we hit the “wraparound point”
– Idea: use the CSC single-layer trigger as a source of truly random triggers

11 June Run Meeting and 12 June XEB meeting: plan converged
– Details in backup slides
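The “wraparound point” above can be made concrete with a back-of-the-envelope estimate: once the test runs longer than the generator’s period, the L1A pattern repeats and the triggers stop being effectively random. A minimal sketch, where the generator period is an assumed placeholder (the real GT value is not quoted here):

```python
# Back-of-the-envelope estimate of when a pseudo-random trigger generator
# "wraps around". The period below is a placeholder assumption, NOT the
# actual GT value (which is not given in this talk).
BX_FREQUENCY_HZ = 40.079e6   # LHC bunch-crossing frequency (~25 ns spacing)
ASSUMED_PERIOD_BX = 2**32    # hypothetical generator period, in bunch crossings

wrap_time_s = ASSUMED_PERIOD_BX / BX_FREQUENCY_HZ
print(f"Trigger pattern repeats after ~{wrap_time_s:.0f} s of running")
```

Under this assumed period the pattern would repeat after only a couple of minutes of continuous running, which is why a source of truly random triggers (the CSC single-layer trigger) is attractive for a long test.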

In broad brushes

June Machine Development (MD): determine the latency limitations in ES + Tracker
– Determine where the “ES + Tracker wall” is. Back off by a bit. This is the max. latency increase we know we can safely run with
– Meeting today with ES, Tracker, & Trigger experts on what to do

June Technical Stop (TS): increase L1A latency by +N bx at the GT, increase readout latency by +N bx for all subsystems
– “N” determined from the MD test
– Confirm settings in all detectors are OK with cosmics during the TS

After taking some collision data: roll back to the current timing

At any point in the above, if we see a problem, we roll back. Subsystems will have to monitor what they see at each step…

Q: when to roll back?

We need to run with some collisions to confirm that all subsystems can run with the increased latency
– The natural time to do this is just after the Technical Stop

Recall: the LHC slowly increases the number of bunches in the machine after a Technical Stop
– Recall the April plan: 1 fill of ~6 hr w/ 84b, 480b, 840b, 1380b… but they had a few problems and had to run w/ 1092b for a few fills

To remove (most of) the questions about the effectiveness of this latency test, I claim we should run with the highest possible luminosity and trigger rates
– This means we should keep the increased latency until after the first good 1380x1380 fill during the ramp-up after the TS
– After a bit of discussion, the XEB was happy with this (including Wesley, Darin, Dave, Tiziano, Joao…)

See next page…

Luminosity during the fills after the April Technical Stop

Q: How much data do we collect during the ramp-up?
– Total lumi delivered before reaching a good 1380x1380 fill = 487.6/pb
– Total lumi delivered before reaching a good 1380x1380 fill (if all had gone perfectly) = 280/pb
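Since the plan is to keep the shifted latency until the first good 1380x1380 fill, the April numbers above give a rough estimate of how much collision data would be taken with the increased latency during the ramp-up, and how much of that is overhead from fills that did not go smoothly:

```python
# Quick arithmetic on the April ramp-up luminosity quoted above (units: pb^-1).
delivered_actual = 487.6   # ramp-up as it actually went
delivered_ideal = 280.0    # if every ramp-up fill had gone perfectly

overhead = delivered_actual - delivered_ideal
print(f"Luminosity overhead from ramp-up problems: {overhead:.1f}/pb "
      f"({overhead / delivered_ideal:.0%} above the ideal case)")
```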

To do

Run Coordination
– Finish overhauled shift leader checklist
– Make sure global runs are in the LS1 schedule
– Go over downtimes in recent running. Anything we can do globally to address or reduce them?

CSC
– Put firmware into SVN
– Document ALCT/CLCT muonic timing
– RAT firmware loading

As well…
– Make neutron skims

Backup slides

More detailed plan (MD)

Meeting held between ES, Tracker, and GT experts on Tuesday 12 June to sort out the details of these tests

Tues 19 June: understand the functioning of the ES front-end
– Need: ES, DAQ, GT (random L1As from the GT are OK for this test)

Wed 20 June: measure ES pipeline overflow probability vs. latency
– For this test, we will determine where we hit the wall…
– Need: ES, GT, DAQ, CSC (to provide truly random triggers at 100 kHz)

Thur 21 June: test that the Tracker protects the ES. Determine the L1A latency limitations governed by the Tracker.
– For this test, we will determine where we hit the wall…
– Need: Tracker, ES, GT, DAQ, CSC

Fri 22 June: based on the above tests, experts in ES, Tracker, and Trigger determine and propose a latency shift for the rest of CMS to test
– This value will be sufficiently far from the wall as to make sense
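The Wed 20 June measurement maps out an overflow-probability curve. As a rough illustration of the kind of dependence being probed, here is a toy Poisson model; the buffer depth and drain window are invented illustrative parameters, not ES values (measuring the real behavior is the point of the test):

```python
import math

# Toy model of front-end pipeline overflow for Poisson-random triggers.
# Buffer depth and drain window are illustrative assumptions, NOT actual
# ES front-end parameters.
def overflow_prob(rate_hz, window_s, depth):
    """P(more than `depth` triggers arrive within one buffer drain window)."""
    mu = rate_hz * window_s   # mean number of triggers per window
    p_within = sum(math.exp(-mu) * mu**k / math.factorial(k)
                   for k in range(depth + 1))
    return 1.0 - p_within

# 100 kHz random L1As (the CSC single-layer rate in the plan), with a
# hypothetical 10 us drain window and a 4-event-deep buffer:
p = overflow_prob(100e3, 10e-6, 4)
print(f"Toy overflow probability per window: {p:.2e}")
```

Sweeping the effective window (which grows with latency) in such a model produces the sharp “wall” the test is designed to locate.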

More detailed plan (TS)

Mon-Tues June (first night of Technical Stop): cosmics overnight with +N bx
– Our understanding is that timing can be determined to be OK to ~bx precision with an overnight cosmic run
– This does not mean you cannot do other things. It just means you should be ready to prove that +N bx is OK for cosmic rays

Tues 26 June: subsystems report on latency with cosmics
– Can report at the 9:30 daily meeting

Overnights during the Technical Stop: cosmic run overnight with +N bx
– Subsystems continue to work until the latency is right

At some point after some collision data: roll back all subsystems and the GT to the present latency
– Hopefully this is “easy”