Tevatron and DØ Status and Plans
Arnd Meyer, RWTH Aachen
DØ Germany Meeting, December 4th, 2003



Page 2: Data Taking Status
Total data sample on tape with complete detector > 200 pb⁻¹
... still waiting for the first > 200 pb⁻¹ analysis

Page 3: Data Taking Status cont.
● Lab reached its goal of delivering 225 pb⁻¹ in FY03
  – Also for DØ: BD delivered ∫L dt = 227.7 pb⁻¹ in FY03
  – 26 pb⁻¹ per month since May
  – But had to run 6 weeks longer than hoped
● Cuts into our running time next year
  – Long shutdown – difficult to anticipate rapid startup
  – Six week shutdown next summer / fall
● We do not expect to get significantly more ∫L dt in FY04 than we got this year
  – 25% pbar tax (Recycler commissioning), studies ⇒ … pb⁻¹ in FY04
  – See Dave McGinnis' presentation on Oct 3
[Plot: delivered luminosity, FY02 vs. FY03]

Page 4: FY04 Luminosity Profile
● More (2x/week) and shorter (8 hours) accelerator study periods
  – Studies only if >140 hrs of store time in the previous 14 days
● Higher delivered luminosity through improved stacking rate
  – Improve stacking rate through shorter pbar production cycle time (2.4 sec → 1.7 sec)
  – 11.3 mA/hr (FY03) → 18 mA/hr (FY04)
● ≃ 3 months turn-on after shutdown (on schedule so far)
  – Pessimistic: ∫L dt ≃ 233 pb⁻¹; design: ∫L dt ≃ 328 pb⁻¹
● Will know by the end of the year if Recycler work was successful (but can benefit only in FY2005)
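The stacking-rate goal above can be sanity-checked with a back-of-the-envelope scaling: if the stack rate scaled inversely with the production cycle time, the 2.4 sec → 1.7 sec improvement alone would give roughly 16 mA/hr, so the 18 mA/hr goal evidently assumes additional gains. A sketch of that check – the inverse scaling is an illustrative assumption, not the lab's actual projection:

```python
# Illustrative check of the FY04 stacking-rate goal. The numbers are taken
# from the slide; the simple inverse scaling with cycle time is an
# assumption for illustration, not the accelerator division's model.

def stacking_rate(rate_fy03_ma_per_hr, cycle_fy03_s, cycle_fy04_s):
    """Scale the pbar stacking rate inversely with the production cycle time."""
    return rate_fy03_ma_per_hr * cycle_fy03_s / cycle_fy04_s

rate_fy04 = stacking_rate(11.3, 2.4, 1.7)
print(f"Naive FY04 stacking rate: {rate_fy04:.1f} mA/hr (quoted goal: 18 mA/hr)")
```

The shortfall (about 16 vs. 18 mA/hr) suggests the goal also relies on other stacking improvements beyond the cycle time alone.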

Page 5: ... and a Wishlist for End of FY04
[Chart annotation: "You are here" – 3 ⋅ cm⁻²s⁻¹]
● STT fully commissioned
● Missing pieces of CTT fully commissioned
● Taking data with rates of 2.5 kHz / 1 kHz / 50 Hz after L1 / L2 / L3 and 90% average efficiency
● CPS / FPS used in the trigger and for physics
● Most data quality problems are caught online
● 1-2 fewer people on shift
● Taking shifts and improving the detector is not considered a necessary evil
● We have 0.5 fb⁻¹ of good data on tape
● Reco takes 1 sec/event on my 2-year-old desktop

Page 6: Data Taking Efficiency
[Plot: data taking efficiency vs. time – annotations: Winter '03 shutdown, shutdown, "The Lucky Week", pre-shutdown special runs/studies]

Page 7: Efficiency (Post-Shutdown)
● See some of the improvements in the machines already; e.g. lifetime at 150 GeV in the Tevatron now … hrs vs. a few hours pre-shutdown (removed aperture limitations)
● Biggest problem: "messy" store terminations with large losses, quenches, and CDF losing a couple of silicon ladders
● Not bad after 10 weeks of shutdown!
  – ≃ 8 stores so far
  – Initial luminosities 0.7 – 9.0 – 8.6 – 15.9 – … – 21.6 – 21.9 ⋅ cm⁻²s⁻¹
  – Factor of 2 below the best stores before the shutdown

Page 8: Data Taking Efficiency (pre-SD)
● Average data taking efficiency for 2003 is 86%
● Current upper limit is ≃ 95%:
  – 3-4%: global front end busy
  – 1%: begin- and end-of-store transitions
  – <1%: run transitions
● Typically in the upper 80%'s for the last six months
  – Since Beaune, "lost" 83.5 hours of store time (12.6% by time), of which 12 hours for special runs
  – Largest single failure (4 hrs): low airflow trips of L1CAL on July 27/28. Fan belt replaced.
  – Symptomatic: some of the largest downtimes are one-time occurrences
● Failures by component (without special runs):
  – SMT: 12 hours
  – Muon/L1Muon: 12 hours (trips, readout errors/crashes, trigger problems)
  – CAL: 9 hours (mostly BLS power supplies and hot trigger); + 5 hours L1CAL
  – CFT/CTT/PS: 5 hours
  – L3/DAQ/Online: 4 hours (tracking crates readout)
● Collaboration's decision: L1A vs. FEB fairly optimized; continuous effort to keep this low
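The downtime numbers above are self-consistent and can be tallied in a few lines: 83.5 hours lost is quoted as 12.6% of store time, which fixes the total store time since Beaune, and the per-component hours can be summed against the total. Everything below is copied from the slide or simple arithmetic for illustration:

```python
# Bookkeeping sketch for the downtime numbers on this slide. Component
# hours are taken from the slide; the total store time is inferred from
# the quoted 83.5 hours = 12.6%.

lost_hours = 83.5
lost_fraction = 0.126
total_store_hours = lost_hours / lost_fraction  # ~663 hours since Beaune

downtime_by_component = {
    "SMT": 12,
    "Muon/L1Muon": 12,
    "CAL + L1CAL": 9 + 5,
    "CFT/CTT/PS": 5,
    "L3/DAQ/Online": 4,
    "special runs": 12,
}

accounted = sum(downtime_by_component.values())
print(f"Total store time: {total_store_hours:.0f} hours")
print(f"Accounted for: {accounted} of {lost_hours} lost hours")
```

The remaining ~25 hours presumably spread over smaller one-off failures, consistent with the "one-time occurrences" remark.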

Page 9: Data Taking Efficiency cont.
● Much of the time running close to our desired efficiency to tape – 90%
● At the same time, data quality improves
  – The number of conditions that cause us to stop data taking (automagically or manually) is continually increasing
● Credit to many (few) dedicated people!
● Large downtimes are discussed at the weekly operations meeting
● Several systems marginal in terms of expert coverage: one or no resident expert – no manpower for proactive improvements

Page 10: Run II Bests
Regularly updated:
– Best days by data taking efficiency: 95.0% on June 22nd; so far 8 days with 93% or better efficiency
– Best runs and days by recorded luminosity (Aug 10, 488 nb⁻¹; May 4, 1.68 pb⁻¹)
– Best stores by initial DØ luminosity (Aug 10, 4.55 ⋅ cm⁻²s⁻¹)

Page 11: A "Typical" Store
[Plot annotations: wobbly L1 rates (CAL); initial lum. 3.9 ⋅ cm⁻²s⁻¹; store lost (quench); 5-3% L1 busy]
Present max. rate guidelines (5-10% headroom to account for rate fluctuations):
– Level 1: 1.4 kHz – limited by FEB < 5%
– Level 2: 800 Hz – limited by muon readout
– Level 3: 50 Hz – limited by offline (30% room)
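The FEB guideline can be read through a simple non-paralyzable deadtime model, in which the front-end-busy fraction is approximately the L1 accept rate times the per-event readout busy window. Both the model and the busy window inferred below are illustrative assumptions for this sketch, not measured DAQ parameters:

```python
# Illustrative linear (non-paralyzable) deadtime model: busy fraction
# ~ rate x per-event busy window. Inverting the guideline (FEB < 5% at
# 1.4 kHz) gives an implied busy window; this is an inference for
# illustration only, not a measured number.

def feb_fraction(l1_rate_hz, busy_window_s):
    """Front-end-busy fraction in a simple linear deadtime model."""
    return l1_rate_hz * busy_window_s

implied_window_s = 0.05 / 1400.0  # ~36 microseconds per L1 accept
print(f"Implied busy window: {implied_window_s * 1e6:.0f} us")
print(f"FEB at 1.0 kHz: {feb_fraction(1000.0, implied_window_s):.1%}")
```

Under this toy model, backing the L1 rate off from 1.4 kHz to 1.0 kHz would drop FEB from 5% to about 3.6% – the kind of headroom the guideline is after.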

Page 12: "Typical" Store (Post-Shutdown)
[Plot annotations: 7% L1 busy at 1 kHz; 15% at 1.5 kHz; file transfers to FCC]

Page 13: Control Room Shifts
● It is a burden on the collaboration to fill that many shifts (and schedule them!)
  – The shifter duty is 7 shifts / 6 months per person on masthead
  – Only about 1 shift per month per person (on average!)
● Calorimeter and Muon shifts consolidated into CalMuon shifts since June
  – Rocky at times, but overall OK
  – Cost of additional training offset by savings in total number of shifts
  – There is more training involved – took some time to be realized by "old" shifters
● Next natural choice for merging is SMT / CFT – will require initiative from detector groups (clear instructions, simplify, automate)
● More than a third of the collaboration has not yet taken a single shift in 2003
● The fact that we are collecting data with high efficiency is to a large part due to the presence of 5-6 well trained people in the control room

Page 14: Data Quality & Global Monitoring
● Online data quality monitoring consists of three (four) parts:
  – Significant Event System ("Slow Control")
    – Catches an increasing number of hard- and software failures
    – In many cases pauses the run to ensure consistent data quality
    – Working very well, could use additional expert guidance
  – DAQ Artificial Intelligence
    – Notifies shifter of abnormal conditions (global rate fluctuations, BOT trigger rate, ...)
    – Automagically fixes certain problems (SCLinit), e.g. sync problems in L2
  – Sub-detector monitoring
    – Examines many expert-level plots (but experts are not generally on shift)
  – Global Monitoring
    – Trigger rates, Trigger Examine, Vertex Examine, Physics Examine
● Global Monitoring has great potential, but there are many issues – examples follow

Arnd Meyer (RWTH Aachen) Dec 4 th, 2003Page 15 – If we can't fill all shifts, need to think about merging with Captain's and other shifters' duties Global Monitoring cont. ● Technical Issues During the transition from trigger list v11 to v12, ran for weeks with wrong/bad reference plots (different triggers, then rapidly changing prescales) PhysEx uses random sample – should be based on certain triggers Low statistics (slow reconstruction) ● Psychological Issues Lack of interaction between detector shifters and GM – GM detects feature in Gtrack phi distribution – SMT shifter cannot correlate with his occupancy plots – Need effort from all detector groups to bring their expertise into GM plots (only Muon group has done this so far) ● Organizational Issues Shifts not being filled (e.g. 8 in August)

Page 16: DQ & GM cont.
● LmTrigger urgently needs improvement
  – For example averaging over different time periods, uncertainties, luminosity dependence of trigger cross sections, ...
  – Extremely important tool to identify problems quickly
● Overall, somewhat slow progress (remember Beaune?)
● If we want to continue reducing the number of shifters in the control room, GM needs a major effort (time, people, attention)
  – From the collaboration – a great task for groups new to DØ
  – Automation should be the goal
  – Need to catch all major problems online
● Up to one third of the data is thrown away at the analysis stage – sad!
  – Everybody who discards data through "bad" or "good" run lists should make that extra step and think about how to catch the problems earlier!
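The luminosity dependence mentioned for LmTrigger follows from R = σ·L: a trigger's effective cross section (rate divided by instantaneous luminosity) should stay roughly constant through a store, so a drift flags a problem independently of the falling luminosity. A minimal sketch of such a check; the function names and the 20% tolerance are illustrative assumptions, not the actual LmTrigger implementation:

```python
# Sketch of a luminosity-normalized trigger-rate check. Since R = sigma * L,
# the effective cross section sigma = R / L should stay flat during a store.
# With L given in units of 1e30 cm^-2 s^-1, sigma comes out in microbarns
# (1 ub = 1e-30 cm^2). Names and tolerance are illustrative.

def trigger_xsec_ub(rate_hz, lum_1e30):
    """Effective trigger cross section in microbarns."""
    return rate_hz / lum_1e30

def drifted(xsec_ub, reference_ub, tolerance=0.20):
    """Flag a cross section that moved more than `tolerance` from reference."""
    return abs(xsec_ub - reference_ub) / reference_ub > tolerance

# Example: 100 Hz at L = 50e30 cm^-2 s^-1 is an effective 2 ub.
print(trigger_xsec_ub(100.0, 50.0))
print(drifted(2.6, 2.0))
```

Comparing cross sections instead of raw rates is what makes the reference usable across stores with different luminosities.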

Page 17: "Summer" Shutdown
● Successful 10 week shutdown (Sep 8 – Nov 17)
  – 7-8 weeks for experiments, 2-3 additional weeks with limited access
● Four scheduled power outages, a couple of unscheduled ones
● 24x7 DAQ shifts and day shift Captains – thankless task!
  ("Good God, please, someone, if you see me in the parking lot, run me over. Kill me. I am so bored to death." – anonymous DAQ shifter)
● Major DØ goals for the shutdown
  – Improve reliability: reduce access time, periods with incomplete detector, etc.
  – Improve quality of the data: reduce calorimeter noise, repair silicon HDI's
● Some major accelerator tasks
  – Recycler vacuum improvements, bakeout
  – Tevatron alignment work, installation of Tev alignment network
  – Replace rotting magnet stands
  – Improve some aperture limitations (Tev, transfer lines), upgrade instrumentation

Page 18: Post Shutdown Status
● Access went smoothly overall – no accidents, on schedule, great support
  – Detector groups together with mechanical and electrical support groups have developed detailed job lists including detector opening and closing, allocation of manpower from detector groups and support teams, and survey as needed during major moves
● First store on November 22nd, as scheduled
  – Took data within 3 minutes of first 36x36 store
  – Quality data taking established with 2nd 36x36 store
● Single days with about 90% data taking efficiency, already many runs with >94% efficiency
  – Biggest downtimes: solenoid protection electronics failure (~4 hours); MCH ↔ FCH switch failure (~2 hours)
● Comprehensive reviews of online and offline quality of the data taken after the shutdown: December 12th, 15th, 19th
  – Identify more problems much earlier than at the "Bad Run List" level

Page 19: Silicon Status
[Plot: current dose]
● Cancellation of the silicon replacement means we must plan to operate the current detector for the life of the experiment
  – Layer 0 (which likely will be a part of the rebaselined Run II upgrade) without additional hits from the outer SMT layers is of little use
  – Have to evaluate what steps can be taken to increase the chances of the detector's reliable operation long term
● Bias scans (August): measure depletion one layer at a time
  – Use tracks from CFT and other SMT layers to determine cluster charge as a function of HV
  – Runs with HV varied between 0% and 100% in 10% steps
  – SMT in full readout (no sparsification)
  – Average over ladders (statistics)
  – Results confirm expectation, but with large uncertainties
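The bias-scan logic can be sketched in a few lines: cluster charge rises with bias voltage and plateaus once the sensor is fully depleted, so the depletion point can be estimated as the first HV step where the charge reaches the plateau. The sample points and the 95% threshold below are invented for illustration, not DØ data:

```python
# Minimal sketch of extracting a depletion point from a bias scan: cluster
# charge rises with HV and plateaus at full depletion. Sample values are
# invented for illustration; the real analysis averages over ladders and
# carries uncertainties, as noted on the slide.

def depletion_point(hv_steps, charges, frac=0.95):
    """Return the first HV step where the charge reaches `frac` of the plateau."""
    plateau = max(charges)
    for hv, q in zip(hv_steps, charges):
        if q >= frac * plateau:
            return hv
    return None

hv = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]  # percent of nominal HV
q = [0, 5, 12, 18, 22, 24, 25, 25, 25, 25, 25]     # arbitrary charge units
print(depletion_point(hv, q))
```

With the 10% HV steps quoted on the slide, this estimator can only localize the depletion point to one step, which is one reason the quoted uncertainties are large.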

Page 20: Silicon Status
● Main task during shutdown: repairs of failed HDI's / electronics
  – Before shutdown: 136 disabled (up from "irreducible" 84 just after Jan shutdown)
  – 12 are "definitely repaired" (2 weeks of 2 shifts/day with 4 people)
  – 59 are "unstable"
● 112 HDI's are currently not powered:
  – 50 ladders (11.6%)
  – 30 F-wedges (10.4%)
  – 32 H-wedges (16.7%)
● A few HDI's failed when the magnet was energized
● All but 2-3 of the enabled HDI's participate in track reconstruction
● SMT is operating stably
[Plot: 1st store after shutdown]
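The quoted percentages let one back out approximate device totals by inverting them; the inferred denominators below are arithmetic inferences for illustration, not official SMT counts:

```python
# Cross-check of the unpowered-HDI numbers on this slide. The per-type
# totals are inferred by inverting the quoted percentages; they are
# illustrative estimates, not official detector counts.

unpowered = {
    "ladders": (50, 0.116),
    "F-wedges": (30, 0.104),
    "H-wedges": (32, 0.167),
}

totals = {name: round(n / frac) for name, (n, frac) in unpowered.items()}
assert sum(n for n, _ in unpowered.values()) == 112  # matches the slide

for name, (n, frac) in unpowered.items():
    print(f"{name}: {n} of ~{totals[name]} unpowered ({frac:.1%})")
```

The three categories sum exactly to the 112 unpowered HDI's quoted, so the slide's numbers are internally consistent.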

Page 21: Central Fiber Tracker
● Shutdown tasks
  – Maintenance of LVPS's and installation of upgraded LVPS (better connectors), maintenance of the VLPC He cooling system
  – Major job: modification of AFE boards to remove unused SVX inputs from the readout – reduces data size and DAQ deadtime
● Known issues: channels corresponding to 10 SVX chips are not operational
  – Swapped AFE boards – problem stays
  – Found that the problem appeared after the last power outage with warm-up of the cryostat → serious problem!
● Reconstruction is still using old (wrong) CFT maps – makes offline data quality checks hard

Page 22: Calorimeter
● Replaced all large cooling fans for preamplifier cooling during shutdown
● Studies of calorimeter noise – access priority given to "noise task force"
  – Characterize noises: 10 MHz, 14.3 MHz (RF/4), "Ring of Fire", ...
  – Controlled power-up after power outages to identify sources
  – Improve grounding
● "Ring of Fire" / "Welder Noise"
  – Sudden burst of triggers/events
  – Appears when a welder is triggered in DAB3; other unidentified sources
  – Entry point into the cryostat identified: noise disappears almost entirely when temperature monitoring cables are disconnected

Page 23: Calorimeter Noise
● RF/4 noise appeared when muon chambers were switched back on
  – Unstable; not really observed in data
● 10 MHz noise from SMT sequencers
  – Went around with a radio tuned to "WSMT" (10 MHz) to identify sources
● Series of grounding tests (shutdown)
  – Disconnect AC, safety ground, phone, etc.
  – Visual inspection: found ≃ 10 contacts
  – Attach current controlled power supply, slowly increase current up to 50 A, and look for heat sources
  – Found (and fixed) a few more problems
  – Improved grounding reduces "welder noise" with temperature cables attached by about a factor of 2
● Does all the work pay off? Robert: "Looks better than ever before!"

Page 24: (New) Calorimeter Monitoring
Occupancy/energy views to catch hot/cold zones
[Screenshot: 2nd store after shutdown]

Page 25: Muon Systems
Shutdown tasks:
● Forward Muon
  – First-time access to A-layer forward muon tracker (8-12 hours for opening and closing) completed: replacement of preamplifiers, gas leaks, gas monitors; C-layer repairs in progress
  – Number of non-working channels now 0.15% (trigger counters), 0.5% (drift tubes)
● Central Muon
  – Installed extra trigger counters under the detector – running into a few snags (tight clearance on east side) but no show-stoppers
  – Installed 144 remote power cycle relays for front-end electronics
  – Pulled a couple of wires drawing moderate to high currents for investigation
  – Installed PowerPCs in the remaining muon readout crates that had 68k's
  – One PDT problem will require more than 4 hours of access to repair
● Muon systems are collecting physics quality data – no known serious issues
[Photo: "the infamous bottom hole"]

Page 26: Luminosity System & FPD
● Luminosity system
  – Cable work in the gaps
  – Complete readout electronics installed (finally!) – required to reduce the embarrassing 10% uncertainty on our luminosity measurement
● Forward proton detector
  – Installation of electronics for full system operation, all 18 pots in 6 castles

Page 27: Level 2 Upgrade
● All Level 2 Alphas have been replaced with Betas! Running smoothly so far
● Too fast for PDT readout code – had to slow down temporarily until the firmware is corrected
● Indications that the reason for increased front-end busy lies within the Level 2 system

Page 28: Silicon Track Trigger
● Reminder
  – The STT is part of the Level 2 trigger system
  – Based on L1CTT roads, refit tracks using SMT axial hits → improved p_T and impact parameter
  – Reduce background, improve p_T resolution, cut on impact parameter
  – All Run II papers CDF has published so far are based on their equivalent (SVT)
● By default 5 STT crates in the readout after the shutdown (none before)
● Not yet in trigger
  – And unfortunately there's actually little point presently, with 1.4 kHz L1A and 1 kHz L2A

Page 29: Trigger, DAQ, Online, General
● Firmware upgrades on L1CTT and L1Muon, maintenance on L1CAL
● Online & Controls
  – Replaced 8 disks that have died over the last 2 years (4 disks for the data logger)
  – Major online software upgrades – Python, EPICS, VxWorks
  – New Trigger Control Computer, online IP shuffling, other upgrades/maintenance
● General detector maintenance – air handlers, hydraulic systems, vacuum jackets, cooling water systems, ODH heads, etc.
● Old cryo UPS replaced
● ... No time to talk about many other shutdown jobs!

Page 30: Conclusions
● Detector is running well (better than some people want to make us believe)
  – Data taking efficiency 86% for the year – "physics analysis efficiency"??
  – 216 pb⁻¹ integrated luminosity in hand with full detector in readout
  – Progress in online data quality monitoring not as good as hoped for
● Shutdown went well and on schedule – a lot of work completed
● Came out of the shutdown well prepared
  – First store on November 22nd, as scheduled
  – Took data within 3 minutes of first 36x36 store; quality data taking established with 2nd 36x36 store
● Major worries
  – L1 bandwidth, data quality monitoring, diminishing manpower
  – Disconnect between data taking and analysis
  – Offline progress (processing/reprocessing) is slowing down physics output