Operation and performance of the ATLAS Semiconductor Tracker


Operation and performance of the ATLAS Semiconductor Tracker. Nick Barlow, University of Cambridge, on behalf of the ATLAS SCT collaboration.

Contents
- Overview of LHC/ATLAS/SCT
- Design
- Operational issues
- Performance
- Current activities

Introduction
The Large Hadron Collider (LHC) at CERN is the world's highest-energy particle accelerator. Beams of 4 TeV protons are made to collide head-on at four points around the ring, where particle detectors record the results of the collisions. ATLAS is the largest of these detectors, designed to study the Standard Model and search for new particles. The Inner Detector (ID) is the innermost part of ATLAS, and measures the trajectories of charged particles ("tracks").

ATLAS Inner Detector and SCT
The ID consists of:
- Pixel detector
- Semiconductor Tracker (SCT)
- Straw tube tracker (TRT)
all within a 2 T solenoidal B-field. The Pixel detector and SCT are kept cold by evaporative cooling using C3F8.
The SCT is made up of 4 cylindrical barrel layers and 9 endcap disks on each side, and consists of 4088 double-sided silicon modules with p-in-n silicon strip sensors. Each module side has 768 aluminium strips (pitch = 80 microns), giving more than 6 million channels in total, read out by 6 ABCD3TA ASIC readout chips. The two sides of a module have a stereo angle of 40 mrad between their strip directions, in order to give a 2D position for each "hit".
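The way the small stereo angle turns two 1D strip measurements into a 2D point can be sketched as follows. This is a minimal illustration under simplifying assumptions (centred strip numbering, small-angle geometry, both sides sharing a local origin); the function and constant names are hypothetical, not from the SCT software.

```python
import math

PITCH_MM = 0.080      # 80 micron strip pitch
STEREO_RAD = 0.040    # 40 mrad stereo angle between the two sides

def stereo_hit_position(strip_a, strip_b):
    """Combine strip numbers from the two sides of a module into a
    local (x, y) position. Side A strips measure x directly; side B
    strips, rotated by the small stereo angle, measure approximately
    x + y * tan(angle), so the strip difference gives y."""
    xa = strip_a * PITCH_MM
    xb = strip_b * PITCH_MM
    y = (xb - xa) / math.tan(STEREO_RAD)
    return xa, y

x, y = stereo_hit_position(100, 101)  # one-strip difference -> y of ~2 mm
```

Note how a one-strip difference between the sides corresponds to roughly 2 mm along the strip direction: the stereo angle trades longitudinal resolution for the ability to keep the strips long.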

SCT readout / Data Acquisition (DAQ)
Data from the SCT are read out by off-detector electronics in eight DAQ crates. Trigger signals are received via an optical link from the Trigger and Timing Crate (TTC). Trigger and clock signals are sent to the modules along the optical "TX" link (one fibre per module); modules then return hit data along the "RX" link (one fibre per side).
The readout is binary: each strip records "1" or "0" in each 25 ns time bin, depending on whether or not the collected charge exceeded a configurable threshold.
Each Read-Out Driver (ROD) assembles and formats the data from 48 modules, then sends it along the "S-link" to the ROS (central ATLAS DAQ).
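The binary readout scheme described above can be sketched in a few lines. The 1 fC default threshold is the usual SCT working point, but here it is just an illustrative parameter; the function name is hypothetical.

```python
def binary_readout(charges_fc, threshold_fc=1.0):
    """Binary readout: one bit per strip per 25 ns bunch crossing,
    1 if the collected charge exceeds the configurable threshold.
    No pulse-height information is kept, which keeps the data volume
    small at the cost of analogue detail."""
    return [1 if q > threshold_fc else 0 for q in charges_fc]

bits = binary_readout([0.2, 3.5, 0.8, 1.6])  # -> [0, 1, 0, 1]
```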

Redundancy in optical communications
Two types of redundancy are implemented, in case of failure of an optical transmitter or damage to an optical fibre.
TX redundancy: a module can receive clock and command signals electronically from a neighbouring module. This cannot be "daisy-chained": if two adjacent modules lose optical input on the TX line, data from at least one will be lost.
RX redundancy: both sides of a module can be read out through one RX link. In most barrel modules this involves bypassing one chip, losing data from 128 strips.

Timing
Upon receipt of a trigger (via the TX line), the ABCD chip sends back (along the RX link) data from the last three 25 ns time bins in its "pipeline". If the detector is correctly "timed in", the middle bin should correspond to the same bunch crossing as the event that fired the trigger. The trigger delay must be adjusted module by module to account for fibre lengths and particle time-of-flight. A timing scan is performed once or twice per year to find the optimum delays.
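A simple timed-in check over the three-bin patterns might look like the sketch below: count how often the hit falls in the middle bin. The pattern encoding and function name are assumptions for illustration, not the actual SCT monitoring code.

```python
from collections import Counter

def timing_quality(patterns):
    """Given per-hit 3-bin patterns (strings like '010', earliest bin
    first), return the fraction of hits whose middle 25 ns bin fired.
    When the module delay is correctly tuned, this fraction should be
    close to 1; a shifted peak suggests the delay is off by 25 ns."""
    counts = Counter(patterns)
    in_time = sum(n for p, n in counts.items() if p[1] == '1')
    return in_time / sum(counts.values())

frac = timing_quality(['010', '011', '001', '010'])  # 3 of 4 in time
```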

LHC/ATLAS operations
LHC delivered: 40 nb-1 in 2010, 5 fb-1 in 2011, 22 fb-1 in 2012, with 50 ns bunch spacing. The high instantaneous luminosity leads to up to ~40 pp interactions per bunch crossing (μ), giving high detector occupancy and a non-zero rate of Single Event Upsets (SEUs).
The ATLAS trigger system selects interesting events: the "Level 1" hardware trigger runs at a rate of ~70 kHz, and the detector subsystems must read out their data at this rate. The software-based High Level Trigger further reduces the rate to ~400 Hz for data recording.

SCT operations
More than 99% of readout channels are operational:
Component | Total | Out of readout (barrel/endcap) | Fraction
Modules | 4088 | 11/19 | 0.73%
Chips | 49056 | 38/11 | 0.10%
Strips | 6279168 | 4111/8020 | 0.21%
The SCT bias voltage is maintained at a safe 50 V until "Stable Beams" is declared, at which point the HV is ramped to 150 V (an automated action in 2012, though with shifter oversight). Modules giving persistent readout errors are automatically recovered (reconfigured), and all modules are reconfigured every 30 minutes during running to recover from SEUs.
Lumi-weighted SCT good data fraction: 99.9% (2010), 99.6% (2011), 99.1% (2012).

Operational issues – ROD busy
If, for any reason, a ROD cannot send data to the ROS fast enough to keep up with the L1 trigger rate, it asserts "BUSY", which stops triggers. The ROD is then automatically "stoplessly removed" from the run, and can be recovered by a shifter action. If more than one SCT ROD is out of the readout at the same time, the data are considered "bad" for physics.
Running at high trigger rates and large occupancies uncovered a flaw in the ROD firmware: a ROD could go BUSY if a small number of modules returned no data, too much data, or nonsensical data. Several issues in the ROD firmware were identified and fixed, but the problem persisted until the end of the Run. A detailed investigation during the current shutdown is a high priority.

Operational issues: CiS modules
Approximately 25% of endcap modules were manufactured by CiS (the remainder by Hamamatsu), with a slightly different sensor design. In May 2012 we started observing strange behaviour from some of these modules: about two hours into high-luminosity runs, the leakage current would increase dramatically and one side of the module would become noisy. Eventually the ROD would go BUSY, and/or the module HV would trip. This could be mitigated in the short term by reducing the HV from 150 V.

CiS modules
The problem mainly affected side 0 of the "middle" modules. It is still not fully understood, but was mitigated for 2012 running by reducing the "Standby" voltage (the HV during inter-fill periods) from 50 V to 5 V for all CiS modules. The current would still increase during a run, but would plateau before reaching problematic levels.

Operational issues – TX failures
TX channels (each corresponding to one module) on the TX plugins began failing in 2009. Analysis of the failed units indicated ESD damage during the manufacturing process. Replacements were ordered, with improved ESD precautions during manufacturing, and installed. After ~3 months of operation, the replacements also began to fail, with some evidence that humidity was damaging the units. Replacements were then ordered from a different vendor, with improved resistance to humidity. TX failures continued during the 2012 run, though at a lower rate, believed to be due to a thermal mismatch between the epoxy and the VCSEL array. A third full set of replacements, using a commercial VCSEL package (the LightABLE engine), will be installed during the current shutdown period. Very little data was lost to this problem, thanks to the TX redundancy (modules can receive clock and command signals electronically from neighbouring modules) and the vigilance of shifters and on-call experts.

Efficiency
The intrinsic hit efficiency is defined as "hits per possible hit", i.e. non-operational modules are excluded from both numerator and denominator. To measure the efficiency of each module side, track fits are performed ignoring that side, and we then check whether a hit is found where the track crosses it. The efficiency is well above 99% for all layers and sides.
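The "hits per possible hit" definition can be sketched as below. The data layout (a list of track crossings flagged as operational or not) and the function name are assumptions for illustration.

```python
def hit_efficiency(crossings):
    """Unbiased hit efficiency for one module side: for tracks refit
    without this side, count how often a hit is found at the predicted
    crossing point. `crossings` is a list of (found_hit, operational)
    pairs; crossings of non-operational modules are excluded from both
    numerator and denominator, as in the text above."""
    good = [found for found, operational in crossings if operational]
    return sum(good) / len(good)

# toy sample: 995 found, 5 missed, 10 crossings of dead modules ignored
eff = hit_efficiency([(True, True)] * 995 + [(False, True)] * 5
                     + [(False, False)] * 10)
```

Excluding the dead-module crossings is what makes this an *intrinsic* efficiency: it characterises the sensors that are actually in the readout, separately from the (small) dead fraction quoted elsewhere.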

Noise
Too many fake hits from noise could impair pattern recognition in the tracking software, so the SCT was designed to have a noise occupancy below 5x10^-4. Occupancy can be measured either in standalone calibration runs or as part of normal ATLAS data-taking (by looking in empty bunch crossings). The noise is well within design limits. (At high luminosity there are in any case many more hits from low-pT tracks, including "loopers", than from intrinsic noise.)
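The empty-bunch-crossing measurement amounts to a simple counting exercise, sketched below with toy numbers; the function name and inputs are illustrative assumptions.

```python
def noise_occupancy(hits_in_empty_bx, n_strips, n_empty_bx):
    """Noise occupancy estimated from empty bunch crossings: the
    average fraction of strips firing per 25 ns bin when no beam is
    present, so every hit must be noise. The SCT design specification
    is < 5e-4."""
    return hits_in_empty_bx / (n_strips * n_empty_bx)

# toy numbers: 1200 noise hits over 100 empty crossings of 6M strips
occ = noise_occupancy(hits_in_empty_bx=1200,
                      n_strips=6_000_000,
                      n_empty_bx=100)
```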

Tracking performance
Up to 4,000 tracks per event were seen in the high-pile-up conditions of 2012, with excellent agreement between data and Monte Carlo simulation.

Alignment
Track-based alignment is an iterative process in which residuals (the difference in position between a hit-on-track and the track intersection point) are minimized. Large structures are aligned first (e.g. the SCT with respect to the TRT and Pixel detector), eventually going down to individual modules. Alignment in the barrel region, particularly in the horizontal ("x") direction, was already good in 2009 as a result of cosmics running. The SCT is now close to perfectly aligned.

Frequency Scanning Interferometer
A laser-based alignment system that can precisely track the distances between reference nodes. Large movements observed by the FSI can alert the track-based alignment team that a new set of alignment constants may be needed.

Lorentz Angle
In the absence of a B-field, charge carriers produced by ionization of the silicon would be expected to travel in the direction of the E-field, i.e. perpendicular to the surface of the sensor. The solenoidal B-field deflects the charge carriers by some angle, the Lorentz angle. It can be measured from the distribution of mean cluster size versus track incidence angle: the cluster size is smallest when the track is parallel to the carrier drift direction.
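Extracting the angle from the cluster-size distribution can be sketched as a fit for the minimum, here with a simple quadratic fit on toy data. The real measurement uses a more careful functional form; the function name and numbers are illustrative assumptions.

```python
import numpy as np

def lorentz_angle(angles_deg, mean_cluster_size):
    """Fit a parabola to <cluster size> vs incidence angle near its
    minimum and return the angle at the minimum, which (up to sign
    conventions) is the measured Lorentz angle."""
    a, b, c = np.polyfit(angles_deg, mean_cluster_size, 2)
    return -b / (2 * a)

angles = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])
sizes = 1.1 + 0.002 * (angles - 3.5) ** 2   # toy data, minimum at 3.5 deg
theta_L = lorentz_angle(angles, sizes)
```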

Radiation damage
Ionising radiation can have several effects on silicon sensors, including:
- increased leakage current
- charge trapping / defects
- transition from n-type to p-type (type inversion)
FLUKA simulation indicates that the dose received to date is still some way short of that required for type inversion.

Radiation damage
Radiation damage increases the leakage current across the silicon sensors. This can be measured by the power supplies and compared to model predictions as a function of the dose received (using FLUKA for the dose and the Hamburg/Dortmund model for the damage). The agreement is excellent for barrel modules over several orders of magnitude; agreement in the endcap is less spectacular, but still within 20%. Radiation damage is not yet having a significant impact on the operating characteristics of SCT modules, and the dose received does not necessitate keeping the SCT cold during the current shutdown period.
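The core of the Hamburg-model prediction is a linear scaling of the radiation-induced current with fluence and depleted volume, sketched below. The damage constant alpha and the toy numbers are assumptions for illustration: alpha depends on annealing history and temperature, and real comparisons also correct the measured current to a reference temperature.

```python
def leakage_current_increase(fluence_neq_cm2, volume_cm3, alpha=4e-17):
    """Hamburg-model scaling of the radiation-induced leakage current:
    delta_I = alpha * fluence * depleted volume, with fluence in
    1 MeV neutron equivalents per cm^2 and alpha in A/cm (the ~4e-17
    default is an assumed illustrative value)."""
    return alpha * fluence_neq_cm2 * volume_cm3

# toy numbers: 1e12 neq/cm^2 over a ~1.1 cm^3 sensor volume
dI = leakage_current_increase(1e12, 1.1)   # amperes, ~44 microamps
```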

SCT DAQ bottlenecks
For LHC Run 2, the DAQ will need to handle μ~80 and a trigger rate of 100 kHz. The readout chain:
- Front-end ABCDs: 8-deep event buffer, read out at 40 Mbps. Sufficient for 100 kHz L1 with μ~87.
- ROD/BOC pairs (x90): ROD input decoder and FIFO, 512 deep.
- S-link to ROS (x8): 32-bit-wide transfer at 40 MHz = 1.28 Gbps. Sufficient for 100 kHz readout with only μ~30-40: this is the bottleneck.
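The S-link bandwidth figure quoted above is just arithmetic, which the sketch below makes explicit; the function names are illustrative.

```python
def slink_bandwidth_gbps(width_bits=32, clock_mhz=40):
    """S-link payload bandwidth: 32-bit words at 40 MHz = 1.28 Gbps."""
    return width_bits * clock_mhz * 1e6 / 1e9

def max_event_size_bits(trigger_rate_khz=100):
    """Average per-ROD event size the link can sustain at a given L1
    rate: at 100 kHz each event may occupy at most ~12800 bits, which
    is what limits the tolerable pile-up before compression."""
    return slink_bandwidth_gbps() * 1e9 / (trigger_rate_khz * 1e3)

bw = slink_bandwidth_gbps()
evt = max_event_size_bits()
```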

Expanded ROD system after shutdown
- Improved data compression on the ROD: the link bandwidth now matches that of the front-end.
- Front-end ABCDs: 8-deep event buffer, read out at 40 Mbps. Sufficient for 100 kHz L1 with μ~87.
- ROD/BOC pairs (x128): ROD input decoder and FIFO, 512 deep.
- S-link to ROS (x12): 32-bit-wide transfer at 40 MHz = 1.28 Gbps. Sufficient for 100 kHz readout with μ~87.

Ongoing updates/improvements (cooling and TX transmitters)
The evaporative cooling is being upgraded to a new thermosyphon system, which uses the 90 m drop from the surface to the cavern to provide pressure. With no moving parts, it is expected to be more reliable than the current compressor-based system. All TX plugins will be replaced again with a commercial VCSEL package, expected to be much more reliable.

Conclusions
The SCT performed extremely well during LHC Run 1: ~99% of readout channels operational, and >99% of the data "good" for physics analysis. Efficiency and noise match or exceed the design specifications, and the effects of radiation damage are in good agreement with model predictions. Updates to the DAQ system, cooling, and optical transmitters are under way during the current shutdown; we expect an even more robust and reliable system for LHC Run 2, able to deal with even higher occupancies and trigger rates.

Backup

Calibration
The ROD can also generate triggers for calibration. Several types of calibration are run, e.g.:
- Opto scans: make sure communication between the BOC and the modules is working well.
- Response curve: ensure that the hit threshold is correctly set to the desired charge on strip.
- Noise occupancy: send many triggers and count the hits, to ensure the SCT modules are operating within design parameters for noise.
Typically 1-2 hour periods are available for calibration between LHC fills.
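A threshold scan of the kind behind the response-curve calibration can be sketched as an s-curve: the fraction of "1"s versus discriminator threshold follows a complementary error function, whose 50% point gives the threshold in charge units and whose width reflects the noise. The function name and numbers are illustrative assumptions, not the actual SCT calibration code.

```python
import math

def occupancy(threshold_mv, vt50_mv, noise_mv):
    """Expected hit fraction at a given discriminator threshold for an
    injected charge whose response is centred at vt50 with Gaussian
    noise: a falling s-curve in the threshold."""
    return 0.5 * math.erfc((threshold_mv - vt50_mv)
                           / (math.sqrt(2) * noise_mv))

occ50 = occupancy(120.0, 120.0, 5.0)   # at threshold == vt50: 50%
```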

Readout

Location of CiS modules

Location of problematic CiS modules