
X-Calibration Common Testing Issues: Grounding, Environmental Noise, and Coding
Anthony Affolder, UC Santa Barbara

Outline
- Motivation
- X-calibration study description
- Results
  - Noise measurements
  - Pedestal measurements
  - Pulse height measurements
  - Pinhole test
  - Gain measurements
  - Suggestions for bad channel description
- Analysis macro
- Differences between ARCS/LT software
  - Algorithmic: common mode subtraction (CMS)
  - Input parameters
  - Bad channel definition

Motivation
- Modules are produced and tested at a large number of sites worldwide
- Uniformity in testing results is necessary in such a distributed system
- X-calibration is an important step in achieving uniformity
- To achieve uniformity:
  - Common algorithms
  - Common set of tests
  - Common requirements
  - Control over the testing environment
- To initiate this process:
  - Investigate testing issues relevant for x-calibration (grounding, environmental effects, etc.)
  - Look for differences in the ARCS and LT code: algorithms, common mode input parameters, requirements

X-Calibration Testing Study
- Attempt to derive a "minimal" set of tests for finding faulty channels as a precursor to x-calibration
  - Efficient, descriptive, redundant
- Tried to qualitatively describe the effects of grounding and environmental noise sources
  - How much common mode noise is acceptable?
  - How stable will testing be over the 2-year period?
- Used the first UCSB pre-production TOB module
  - Contains examples of PA-sensor opens, sensor-sensor opens, pinholes, and high-current channels
  - Shorts have not yet been introduced
- Write-up available at hep.ucsb.edu/cms/xcalibration.ps

Test Setup
- ARCS system with new FE, LED system, and 6.0b software
- Floating LV and HV supplies
- Clamshell
  - Module holding plate in clamshell but isolated > 1 cm from the metal shell
  - Grounding achieved with large-gauge wire to the hybrid-to-utri adaptors
- Four grounding schemes studied:
  - Scheme 0: both module holder and clamshell floating
  - Scheme 1: module holder floating, clamshell grounded
  - Scheme 2: module holder grounded, clamshell floating
  - Scheme 3: both module holder and clamshell grounded
- Nearby gantry used as a source of broadcast noise; tests taken with and without the gantry in operation

Noise Measurements (1)
- Grounding schemes 1 & 3 give the least amount of common mode noise
  - Scheme 1 preferred, as it is closer to an ideal Faraday cage
- With low common mode noise (< 0.5 ADC), sensor-sensor opens, PA-sensor opens, and pinholes show distinct noise levels
  - Levels are consistent across the 8 modules tested at UCSB and 3 at FNAL with these grounding schemes
[Plot: noise measurement, peak on, scheme 1, gantry off; marked sensor flaws: noisy strips, bad CAC, bad Istrip, sensor-sensor opens, PA-sensor opens, pinholes]

Noise Measurements (2)
- Using grounding schemes 1 & 3, the distinct noise levels are also visible in all modes and inverter states tested
[Plots: noise measurements in each mode and inverter state, scheme 1, gantry off]

Noise Measurements (3)
- Even relatively low amounts of common mode noise make the noise levels of opens unpredictable
  - Schemes 0 & 2 have higher common mode noise
  - The presence of common mode noise is indicated by the difference between raw noise and CMS noise in peak-off mode (see the sketch below)
  - Open-channel noise can then be lower than, equal to, or higher than that of good channels
- Common mode noise is also an indicator of sensitivity to environmental noise sources
- Strongly suggest requiring the common mode noise in peak-off mode to be less than ~0.5 ADC for testing
- Use multiple bad-noise levels for the different fault types
[Plots: scheme 2 with nearby gantry off, and scheme 2 with nearby gantry on]
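A minimal C++ sketch of the raw-vs-CMS-noise comparison described above, assuming pedestal-subtracted data and a simple per-APV mean as the common mode estimator; the event data and the commonModeNoise helper are invented for illustration (the real ARCS/LT estimators also skip flagged channels, see the parameter slide below):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// For each event, estimate the common mode as the mean of all channels on an
// APV (pedestals assumed already subtracted). The RMS of the per-event common
// mode is the "common mode noise" that the slide asks to keep below ~0.5 ADC;
// it is also what makes raw noise exceed CMS noise when grounding is poor.
double commonModeNoise(const std::vector<std::vector<double>>& events) {
    double sum = 0.0, sum2 = 0.0;
    for (const auto& ev : events) {
        double cm = 0.0;
        for (double adc : ev) cm += adc;
        cm /= ev.size();                 // per-event common mode estimate
        sum += cm;
        sum2 += cm * cm;
    }
    const double n = static_cast<double>(events.size());
    return std::sqrt(sum2 / n - (sum / n) * (sum / n));
}

int main() {
    // Two toy events on a 4-channel "APV": the coherent shift between the
    // events shows up as common mode noise of 1.0 ADC.
    std::vector<std::vector<double>> events = {{10, 11, 9, 10}, {12, 13, 11, 12}};
    double cmn = commonModeNoise(events);
    std::printf("common mode noise = %.2f ADC (%s)\n",
                cmn, cmn < 0.5 ? "OK for testing" : "check grounding");
}
```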

Pedestal Measurements
- Pedestal measurements are not very sensitive to grounding or local noise sources
- However, opens, shorts, pinholes, etc. are not very different from good channels
- The pedestal test is therefore not useful for finding bad channels
[Plots: scheme 1 with gantry off, and scheme 0 with gantry on]

Pulse Height Measurement
- Open channels differ from normal channels at the same level as the non-uniformity of the calibration response
  - Opens can easily be missed by the pulse height test
- The plots show the tighter ±12% bands
[Plots: scheme 2 with gantry off, and scheme 1 with gantry off]

Pinhole Test (Continuous LED)
- The pinhole test works exactly as designed
- Insensitive to grounding and local noise sources
- Two levels marked for bad channels: pinholes and noisy strips
[Plot: scheme 1, gantry off]

Gain Measurements
- Gain measurements are insensitive to common mode noise and local noise sources
  - Thanks to Aachen for their work coding this test
- Distinct gains for sensor-sensor opens, PA-sensor opens, and pinholes
  - Levels are consistent across the 8 modules tested at UCSB and 3 at FNAL
[Plots: scheme 0 gantry off, scheme 1 gantry on, scheme 2 gantry on, scheme 3 gantry off]

Analysis Macro
- An analysis ROOT macro is under development which correlates testing results to determine the type of each channel defect
- It will ultimately output a list of bad channels with the repairs/rework suggested for the module
- Additionally, the macro generates all plots necessary for module QA
- The macro (with directions and examples) is available at hep.ucsb.edu/cms/arcs_macro.html
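As an illustration of the kind of correlation such a macro performs, here is a hypothetical C++ sketch: a channel is flagged only when independent tests agree on the fault type. The structure, names (ChannelResult, diagnose), and thresholds are invented for illustration and are not taken from the actual macro:

```cpp
#include <cstdio>
#include <string>

struct ChannelResult {
    double cmsNoise;     // from the noise test (ADC), relative to good strips
    double gain;         // from the gain scan, relative to the APV mean
    bool   pinholeFlag;  // from the continuous-LED pinhole test
};

std::string diagnose(const ChannelResult& c) {
    // The pinhole test is the most specific, so trust it first; then confirm
    // opens by requiring that both the noise and the gain be low. The band
    // edges below are placeholders, not the calibrated ARCS values.
    if (c.pinholeFlag)                    return "pinhole: rework";
    if (c.cmsNoise < 0.6 && c.gain < 0.5) return "PA-sensor open: rebond";
    if (c.cmsNoise < 0.9 && c.gain < 0.8) return "sensor-sensor open: rebond";
    if (c.cmsNoise > 1.5)                 return "noisy strip";
    return "good";
}

int main() {
    ChannelResult c{0.4, 0.4, false};   // low noise AND low gain
    std::printf("channel -> %s\n", diagnose(c).c_str());
}
```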

Conclusions of Study
- The gain scan and pinhole tests are the least sensitive to grounding scheme and external noise sources
  - They can find sensor-sensor and PA-sensor opens, pinholes, noisy channels, and most likely shorts
- The noise measurement with "optimal" grounding is also insensitive to external noise sources and finds all of the above faults
- The results of these three tests can be correlated to give high confidence in the failure analysis
  - We propose using this set of tests for bad channel finding
- The other tests are less optimal but are included because they are the current standard
  - The pedestal test does not find common faults
  - The pulse shape measurement is much less sensitive than the gain measurement
- Backplane pulsing tests and shorts still need to be included

ARCS/LT Differences (Algorithms)
- The pedestal and noise calculation algorithms (including common mode subtraction) are identical
- The pulse shape tests use the common mode algorithms differently:
  - LT does not subtract the common mode
  - ARCS subtracts the common mode, excluding the charge-injected channels (see the sketch below)
- Pipeline scans also treat the common mode differently:
  - LT applies common mode subtraction to both the pedestal and the noise, with the potential to miss bad columns of capacitors in the APV
  - ARCS applies common mode subtraction only to the noise calculation
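A minimal sketch of the ARCS treatment in the pulse shape test, assuming pedestal-subtracted data; the function name and data are illustrative:

```cpp
#include <cstdio>
#include <cstddef>
#include <vector>

// Estimate the common mode from the channels that did NOT receive injected
// charge, then subtract it from every channel. (LT, by contrast, performs no
// common mode subtraction in the pulse shape test.)
std::vector<double> subtractCommonMode(const std::vector<double>& adc,
                                       const std::vector<bool>& injected) {
    double cm = 0.0;
    std::size_t n = 0;
    for (std::size_t i = 0; i < adc.size(); ++i)
        if (!injected[i]) { cm += adc[i]; ++n; }   // skip injected channels
    if (n > 0) cm /= n;
    std::vector<double> out(adc.size());
    for (std::size_t i = 0; i < adc.size(); ++i) out[i] = adc[i] - cm;
    return out;
}

int main() {
    // Channel 1 received injected charge; channels 0 and 2 set the common mode.
    std::vector<double> adc = {2.0, 50.0, 2.0};
    std::vector<bool> injected = {false, true, false};
    for (double v : subtractCommonMode(adc, injected)) std::printf("%.1f ", v);
    std::printf("\n");   // prints: 0.0 48.0 0.0
}
```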

ARCS/LT Differences (CMS Inputs)
- The parameters used for skipping bad channels in the common mode algorithm differ from site to site
- Since skipped channels are marked as bad in the ARCS setup, these parameters are very important
- The x-calibration work at Karlsruhe should find the optimal set of parameters

                 Hybrid tests        Module tests
  Parameter      FHIT     CERN       ARCS     LT
  Tskip          4.9      54         4.0      3.0
  Tbad           55       5.0        --       --
  RMS (low)      0.3      0.5        0.0      --
  RMS (high)     10.0     1.2        --       --
  Pskip          0.2      --         --       --
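The exact meaning of each parameter is defined by the ARCS/LT code; as one plausible reading, here is a sketch where Tskip is a deviation threshold in units of the noise spread and the RMS values bound an allowed window. The CMSParams struct, the channelsToSkip helper, and these semantics are assumptions, not the documented definitions:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical reading of the skip parameters: a channel is excluded from the
// common mode estimate when its noise deviates from the APV mean by more than
// tSkip spreads, or falls outside the [rmsLow, rmsHigh] window. In the ARCS
// setup, skipped channels are also marked bad, which is why these values matter.
struct CMSParams { double tSkip, rmsLow, rmsHigh; };

std::vector<bool> channelsToSkip(const std::vector<double>& noise,
                                 const CMSParams& p) {
    double mean = 0.0, spread = 0.0;
    for (double v : noise) mean += v;
    mean /= noise.size();
    for (double v : noise) spread += (v - mean) * (v - mean);
    spread = std::sqrt(spread / noise.size());
    std::vector<bool> skip(noise.size());
    for (std::size_t i = 0; i < noise.size(); ++i)
        skip[i] = std::fabs(noise[i] - mean) > p.tSkip * spread ||
                  noise[i] < p.rmsLow || noise[i] > p.rmsHigh;
    return skip;
}

int main() {
    CMSParams p{4.0, 0.0, 10.0};   // illustrative values, not a specific column
    std::vector<double> noise = {1.9, 2.0, 1.8, 12.0};
    auto skip = channelsToSkip(noise, p);
    return skip.back() ? 0 : 1;    // last channel exceeds rmsHigh -> skipped
}
```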

ARCS/LT Differences (Bad Channel Definitions)
- Partial listing of the bad channel definitions currently in use:

  Test               Guide                  ARCS                          LT                             US TOB
  Pedestal           ±20%                   ±30%; P_i outside [0, 500]    ±20%                           (not applied)
  Noise              CMS skipped channels   N_i outside [0, 5]            |N_i - <N>| > 5σ_N             Multiple fixed cuts
  Pulse height       --                     |PH_i - <PH>| > 3σ_PH         ±12%                           (not applied)
  Gain slope         N/A                    ±20% (not sure if applied)    --                             Multiple percentage cuts
  Pinhole            --                     ±100%                         |Pin_i - <Pin>| > 3σ_Pin       --
  Backplane pulsing  --                     ??????                        |B_i - <B>| > 3σ_B             --

- Every site has different bad channel selection criteria
- Now that a relatively large number of modules has been produced, it is time to converge on a common set of bad channel definitions
- The gain slope, pinhole, and backplane pulsing tests need systematic studies to determine their optimal bad channel definitions
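Most of the statistical cuts in the table share the |x_i - <x>| > kσ_x shape; here is a minimal generic version of that cut (the statisticalCut helper is illustrative, and the per-APV grouping is an assumption):

```cpp
#include <cmath>
#include <cstdio>
#include <cstddef>
#include <vector>

// Generic form of the statistical cuts: flag channel i when
// |x_i - <x>| > k * sigma_x, with mean and sigma taken over the channels
// being compared (assumed here to be one 128-channel APV). k = 5 corresponds
// to the LT noise cut; k = 3 to the pulse height, pinhole, and backplane cuts.
std::vector<bool> statisticalCut(const std::vector<double>& x, double k) {
    double mean = 0.0, sigma = 0.0;
    for (double v : x) mean += v;
    mean /= x.size();
    for (double v : x) sigma += (v - mean) * (v - mean);
    sigma = std::sqrt(sigma / x.size());
    std::vector<bool> bad(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        bad[i] = std::fabs(x[i] - mean) > k * sigma;
    return bad;
}

int main() {
    std::vector<double> noise(128, 2.0);   // one APV of uniform channels
    noise[50] = 6.0;                       // one anomalous strip
    auto bad = statisticalCut(noise, 5.0);
    std::printf("channel 50 %s\n", bad[50] ? "flagged" : "not flagged");
}
```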

Code Comparison Conclusion
- The testing algorithms are very similar between the ARCS and LT software
  - There are only slight differences in how the common mode subtraction is handled in the pulse shape and pipeline tests
- The parameters used for marking bad channels in the common mode subtraction differ between test stands
  - The x-calibration work at Karlsruhe should finalize these parameters
- The bad channel criteria differ at almost every testing site
  - Now is the time to converge on a "final" uniform set of criteria
  - A common language/convention is needed to describe problem channels
  - How to use the test information to determine the fault type should also be investigated further