JEM FDR: Design and Implementation
5th April 2005, JEM FDR, slide 1

- JEP system requirements
- Architecture
- Modularity
- Data formats
- Data flow
- Challenges: latency; connectivity, high-speed data paths
- JEM revisions
- JEM implementation details
- Daughter modules
- Energy sum algorithms
- FPGA resource use
- Performance
- Production tests

Slide 2: JEP system requirements

- Process the -4.9 < η < 4.9 region: ~32 × 32 × 2 = 2k trigger towers of Δη × Δφ = 0.2 × 0.2
  - 9-bit input data (0-511 GeV)
  - 32 × 32 10-bit "jet elements" after em/had pre-sum
- 2 multiplications per jet element: ET → (EX, EY)
- 3 adder trees spanning the JEP (JEMs, CMMs)
- Sliding-window jet algorithm, variable window size within a 3×3 environment (see the sketch below)
- Output data to CTP: thresholded total ET and missing ET, jet hit counts
- Output data to RODs: intermediate results, mainly captured from module boundaries; RoI data for the RoIB
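Because the jet algorithm sums ET over sliding windows of jet elements within a 3×3 environment, a toy model can make the idea concrete. This is only a minimal sketch in Python: the real window placement, overlap-resolution and φ wrap-around rules of the L1Calo jet algorithm are not modelled, and the function below is purely illustrative.

```python
# Minimal sketch of a sliding-window ET sum over jet elements (illustrative
# only; the real jet algorithm's window sizes, overlap handling and phi
# wrap-around are not modelled here).
def window_sums(jet_elements, window=2):
    """jet_elements: 2D list [eta][phi] of ET values in GeV.
    Returns the window ET sum at every position where the window fits."""
    n_eta, n_phi = len(jet_elements), len(jet_elements[0])
    sums = {}
    for ieta in range(n_eta - window + 1):
        for iphi in range(n_phi - window + 1):
            sums[(ieta, iphi)] = sum(
                jet_elements[ieta + de][iphi + dp]
                for de in range(window)
                for dp in range(window)
            )
    return sums

# Example: the 4 x 8 core jet elements of one JEM (environment omitted)
core = [[0] * 8 for _ in range(4)]
core[1][3] = 25                      # a single 25 GeV jet element
print(max(window_sums(core, window=2).values()))   # -> 25
```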

Slide 3: JEP system design considerations

- Moderate data processing power, but tough latency requirements
- Large number of signals to be processed → partition into parallel operating modules
- Algorithm requires an environment around each jet element → high-bandwidth inter-module lanes
- Data concentrator functionality, many → few
- Severely pin-bound design, dominated by input connectivity (modules, processor FPGAs)
- Benefit from similarities to the cluster processor: common infrastructure (backplane), common serial link technology

Slide 4: System modularity

- Two crates, each processing two quadrants in φ → 32 × 8 bins (jet elements) per quadrant
- η range split over 8 JEMs → 4 × 8 jet elements per JEM
- Four input processors per JEM
- Single jet processor per JEM
- Single sum processor per JEM

Slide 5: Replication of environment elements (system and crate level)

- JEM has 32 core algorithm cells: 4 × 8 jet elements
- Directly mapped: 4 PPMs (em, had) → 1 JEM
- JEM operates on a total of 77 jet elements including the 'environment': 7 × 11
- Replication in φ via multiple copies of PPM output data
- Replication in η via backplane fan-out

Slide 6: JEM data formats, real-time data

JEM inputs from PPM:
- Physical layer: LVDS, 10 bits, 12-bit encoded with start/stop bit
- D0: odd parity bit
- D(9:1): 9-bit data, D1 = LSB = 1 GeV

Jet elements to jet processor:
- No parity bit
- D(9:0): 10-bit data, D0 = LSB = 1 GeV
- 10 data bits muxed to 5 lines, least significant first

Energy sums to sum processor:
- No parity bit
- ET(11:0): 12-bit data, D0 = LSB = 1 GeV
- EX(13:0): 14-bit data, D0 = LSB = 0.25 GeV
- EY(13:0): 14-bit data, D0 = LSB = 0.25 GeV

JEM output to CMM:
- J(23:0): 8 × 3-bit saturating jet hits, sent on the bottom port; J24: odd parity bit
- S(23:0): 3 × 8-bit quad-linear encoded energy sums on the top port (6-bit energy, 2-bit range; resolution 1 GeV, 4 GeV, 16 GeV, 64 GeV); S24: odd parity bit

(A sketch of the quad-linear encoding follows below.)
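The quad-linear energy encoding (6-bit value plus 2-bit range, resolution 1/4/16/64 GeV) can be illustrated with a short sketch. This is an illustrative model, not the JEM firmware; the truncation and saturation behaviour at range boundaries is an assumption.

```python
# Illustrative sketch of the 8-bit quad-linear energy encoding described above:
# a 2-bit range selects the resolution (1, 4, 16 or 64 GeV) and a 6-bit value
# holds the scaled energy. Rounding and saturation details are assumptions,
# not taken from the JEM firmware.
RESOLUTIONS = [1, 4, 16, 64]  # GeV per count for range codes 0..3

def quadlin_encode(energy_gev: int) -> int:
    """Return the 8-bit code: bits 7:6 = range, bits 5:0 = value."""
    for rng, res in enumerate(RESOLUTIONS):
        value = energy_gev // res          # truncate to the coarser resolution
        if value <= 0x3F:                  # fits into 6 bits?
            return (rng << 6) | value
    return (3 << 6) | 0x3F                 # saturate at full scale (~4 TeV)

def quadlin_decode(code: int) -> int:
    rng, value = code >> 6, code & 0x3F
    return value * RESOLUTIONS[rng]

assert quadlin_decode(quadlin_encode(63)) == 63    # exact in the 1 GeV range
assert quadlin_decode(quadlin_encode(130)) == 128  # 4 GeV resolution above 63 GeV
```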

Slide 7: JEM data formats, readout

- Physical layer: 16 bits, 20-bit encoded (CIMT, alternating flag bit, fill-frames 1A/1B, HDMP1022 format)
- Event separator: minimum of 1 fill-frame sent after each event's worth of data
- All data streams odd parity protected (serial parity; see the sketch below)

DAQ readout: 67-word stream per L1A / slice being read out
- Input data on D(14:0): 11 bits per channel (nine data bits, 1 parity-error bit, 1 link-error bit)
- 12-bit BCnum & 25-bit sum & 25-bit jet hits on D15

RoI readout: 45-word stream per L1A
- D(1:0): total of 8 RoIs, 2 bits location & saturation flag & 8 bits thresholds passed
- D2: 12-bit BCnum
- D(4:3): used on FCAL JEMs only (forward jets)
- D(15:5): always zero
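Since all readout streams carry serial odd parity, a software checker can accumulate parity over the words of one stream. The sketch below assumes parity covers all data bits of the stream; the exact bit coverage is defined by the firmware and is not taken from this document.

```python
# Illustrative odd-parity check over a serial readout stream of 16-bit words.
# That parity is accumulated over all data bits of the stream is an assumption
# for illustration; the exact bit coverage is set by the JEM firmware.
def odd_parity_ok(words, parity_bit):
    """words: iterable of 16-bit data words; parity_bit: the transmitted bit.
    Returns True if the data bits plus parity bit contain an odd number of ones."""
    ones = sum(bin(w & 0xFFFF).count("1") for w in words)
    return (ones + parity_bit) % 2 == 1

stream = [0x0123, 0x4567, 0x89AB]
# The transmitter chooses the parity bit so the total count of ones is odd:
tx_parity = 1 - (sum(bin(w).count("1") for w in stream) % 2)
assert odd_parity_ok(stream, tx_parity)
```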

Slide 8: JEM data flow

[Data-flow diagram: LVDS deserialiser → input processor → jet processor + readout controller and sum processor + readout controller → to CMM; readout via link PHYs. Rates shown: 400 Mbit/s serial input data (480 Mbit/s with protocol), 40 MHz / 40 Mb/s / 80 Mb/s parallel paths, 640 Mbit/s serial readout data (800 Mbit/s with protocol), not synchronous to the bunch clock.]

- Multiple protocols, data speeds and signalling levels used throughout the board
- Multiplexing up and down takes a considerable fraction of the latency budget (sketched below)
- Re-synchronisation of data generally required at each chip and board boundary: FIFO buffers, phase adjustment with firmware-based detection, delay scans
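The 40 Mb/s to 80 Mb/s multiplexing of jet elements (10 bits carried on 5 lines, least significant bits first, as on slide 6) can be modelled behaviourally; the sketch below is illustrative only, not the firmware implementation.

```python
# Toy model of the 40 Mb/s -> 80 Mb/s multiplexing of jet elements: each
# 10-bit jet element is carried on 5 backplane lines over two 80 Mb/s
# half-ticks, least significant bits first (format from slide 6).
def mux_10_to_5(jet_element: int):
    """Split one 10-bit word into two 5-bit transfers (LSBs first)."""
    return [jet_element & 0x1F, (jet_element >> 5) & 0x1F]

def demux_5_to_10(transfers):
    """Reassemble the 10-bit word on the receiving FPGA."""
    low, high = transfers
    return (high << 5) | low

je = 0x2A7                     # an arbitrary 10-bit jet-element pattern
assert demux_5_to_10(mux_10_to_5(je)) == je
```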

Slide 9: Challenges, latency & connectivity

- Latency budget for the energy sum processor: 18.5 ticks (TDR)
  - Input cables: ~2 ticks; CMM: ~5 ticks; transmission to CTP: <2 ticks
  - → ~9.5 ticks available on the JEM from cable connector to backplane outputs to the CMM
- Module dimensions imposed by use of the common backplane: large module, 9U × 40 cm
- Full height of the backplane used for data transmission due to the high signal count
  - → long high-speed tracks unavoidable → terminated lines needed throughout, timing must be properly adjusted
- High input count: 88 differential cables

Slide 10: Connectivity, high-density input cabling

- 24 four-pair cable assemblies arranged in 6 blocks of 4 (2 φ bins × em, had)
- Same coordinate system now on cables and crate: φ upwards, η left to right (as seen from the front) → cable rotated
- Different cabling for FCAL JEMs → re-map FCAL channels in the jet FPGA firmware

Slide 11: Connectivity, details of differential data paths

- Differential 100 Ω termination at the sink
- 400 (480) Mbit/s input data
  - De-serialisers compatible with the DS92LV1021 (LVDS signal level, not DC-balanced)
  - 88 signals per JEM arriving on shielded parallel pairs
  - Run via long cables (<15 m) and short tracks (a few cm)
  - Require pre-compensation at the transmitting end
- 640 (800) Mbit/s readout data
  - PECL level → electro-optical translator
  - HDMP1022 protocol, 16-bit mode
  - Use a compatible low-power PHY

Slide 12: Connectivity, details of single-ended data paths

- CMOS signals, point-to-point; 60 Ω DCI source termination throughout on all FPGAs
- 40 Mb/s (25 ns) at 1.5 V, no phase control
  - Energy sum path into the sum processor: 40 lines per input processor
  - General control paths
- At 2.5 V: CMM merger signals via backplane (phase adjustment at the receiving end)
- 80 Mb/s (12.5 ns) at 1.5 V: jet elements
  - 7 × 11 × 5 bit = 385 lines into the jet processor
  - 2 × 3 × 11 × 5 bit = 330 lines on the backplane from/to adjacent modules
  - Global phase adjustment via TTCrx; all signals latched into the jet processor on the same clock edge

Slide 13: JEM history

JEM0.0, built from Dec:
- LVDS de-serialiser DS92LV
- Input processors covering one phi bin each, Spartan-2
- Main processor performing jet and energy algorithms, Virtex-E
- Control FPGA, ROC, HDMP1022 PHY, coaxial output
- Complete failure due to the assembly company

JEM 0.x, built from Dec:
- Minor design corrections w.r.t. JEM0.0
- New manufacturer (PCB / assembly)
- Fully functional prototype except CAN slow control and FPGA flash configuration
- TTC interface not to specs due to lack of the final TTCrx chip
- Successfully tested all available functionality

Slide 14: JEM 0

[Annotated view of the JEM 0 module with component labels: 11 input processors, Main processor, 88 × DS92LV1224, ROC, VME interface, 2 × HDMP1022, backplane connector, TTCrx, CAN.]

Slide 15: JEM history (2)

JEM1.0, built in 2003:
- All processors Virtex-2
- Input processors on daughter modules (R, S, T, U)
  - LVDS de-serialisers: SCAN (6-channel)
  - 4 input processors covering three phi bins each
- 1 jet processor on the main board
- 1 sum processor on the main board
- 1 board-control CPLD (CC)
- Readout links (PHY & opto) on a daughter module (RM)
- Flash configurator: SystemACE
- Slow control / CAN: Fujitsu microcontroller
- Successfully tested algorithms and all interfaces
- Some tuning required on the SystemACE clock
- CAN not to the new specs (L1Calo common design)

Slide 16: History, JEM 1.0

[Annotated view of JEM 1.0 with labels: power, Jet, Sum, R, S, T, U, VME, CC, RM, ACE, CAN, Flash, TTC.]

JEM1.0 successfully tested (Mainz, RAL slice test, CERN test beam):
- Algorithms
- All interfaces: LVDS in, FIO inter-module links, merger out, optical readout, VME, CAN slow control

Slide 17: JEM 1.1

- JEM1.1 in production now
- Identical to JEM 1.0, with an additional daughter module: the Control Module (CM)
  - CAN
  - VME control
  - Fan-out of configuration lines
- Expected back from assembly soon

Slide 18: JEM details, main board

- 9U × 40 cm × 2 mm, bracing bars, ESD strips, shielded backplane connector
- 4 signal layers incl. top and bottom, 2 × Vcc, 4 × GND → 10 layers total
- Micro vias on top and bottom, buried vias
- All tracks controlled impedance (controlled / measured by the manufacturer): single-ended 60 Ω, differential 100 Ω
- Point-to-point links only, all hand-routed; 60 Ω DCI source termination on processors (CMOS levels)
- Power distribution
  - All circuitry supplied by local step-down regulators, fused 10 A (estimated maximum consumption < 5 A on any supply, 50 W total)
  - 10 A capacity, separate 1.5 V regulator for the daughter modules
  - Defined ramp-up time (Virtex-2 requirement), staged bypass capacitors, low ESR
- VME buffers scannable, 3.3 V (DTACK: open drain, 3 × 24 mA), short stubs on signal lines, mm
- Vccaux for FPGAs: dedicated quiet 3.3 V
- Merger signals (directly driven by processors) on 2.5 V banks
- FPGA core, inter-processor and inter-module links: 1.5 V

Slide 19: JEM details, main board (2)

- Timing
  - TTC signals terminated and buffered (LVPECL, DC) near the backplane
  - TTCdec module with PLL and automatic crystal-clock backup
  - DESKEW1 bunch clock used as a general-purpose clock; low-skew buffers (within the TTCdec PLL loop) with series terminators
  - DESKEW2 clock used for phase-controlled sampling of 80 Mb/s jet element data (local & FIO), on the jet processor only
- VME
  - Synchronised to the bunch clock; sum processor acts as VME controller
  - Basic pre-configure VME access through the CM
- Readout located on the RM (ROCs on the sum and jet processors)
- DCS/CAN located on the CM (except the PHY, near the backplane)
- Configuration via SystemACE / CF
  - Point-to-point links to keep ringing at bay
  - Multiple configurations, slot-dependent choice

Slide 20: JEM details, main board (3)

- JTAG available on most active components. Separate chains:
  - FPGAs (through SystemACE)
  - Non-programmable devices on input daughters
  - TTCdec and Readout Module
  - Buffers
  - Control Module
- JTAG used for:
  - Connectivity tests at the manufacturer & MZ
  - CPLD configuration
  - FPGA configuration (ACE)

Slide 21: Input modules

- 24 LVDS data channels per module
- 12-layer PCB with micro vias; impedance-controlled tracks: 60 Ω single-ended, 100 Ω differential
- LVDS signals entering via a 100 Ω differential connector on short tracks (<1 cm); differential termination close to the de-serialiser
- 4 × SCAN channel de-serialisers
  - PLL and analogue supply voltage only (3.3 V) supplied from the backplane
  - Digital supply from a step-down regulator on the main board
  - Reference clock supplied via the FPGA
- XC2V1500 input processor; 1.5 V CMOS 60 Ω DCI signals to the sum and jet processors
- SMBus device for Vcc and temperature monitoring (new)

Slide 22: Readout Module RM

- 2 channels, 640 Mb/s; 16-bit → 20-bit CIMT coded, fill-frame FF1, alternating flag bit, as defined in the HDMP1022 specs
- 2 × PHY, 2 × SFP opto transceiver; so far 2-layer boards, high-speed tracks <1 cm
- PHYs tested:
  - HDMP1022 serialiser, 2.4 W/chip (reference, tested in 16-bit and 20-bit mode)
  - HDMP1032A serialiser, 660 mW/chip, 80pc (16-bit)
  - TLK1201A serdes, 250 mW/chip, <80pc, uncoded, requires data-formatter firmware in the ROC (16-bit, 20-bit)
- Successfully run off the bunch clock; converted to a crystal clock due to the unknown jitter situation on the ATLAS TTC clock
- Problems with crystal clock distribution to the RoI PHY (RAL, MZ); the RM seems to work with the clock linked from the DAQ PHY to the RoI PHY
- → Want a local crystal oscillator on the RM → need a new iteration of the RM (HDMP1032A, TLK1201A)

Slide 23: Control Module CM

Combines CAN/DCS, VME pre-configure access and JTAG fan-out:
- CAN
  - Controller to L1Calo specs now (common design for all processors, see CMM/CPM)
  - Link to the main board via SMBus only (Vcc, temperatures)
- VME CPLD (pinout error corrected)
  - Generates DTACK for all accesses within the module sub-address range, to avoid bus timeouts
  - Provides basic access for FPGA configuration via VME: configuration reset, ACE configuration selection / slot-dependent ACE configuration selection via VME
- Buffers for SystemACE-generated JTAG signals to the FPGAs
- TTCdec parallel initialisation (ID from geographical address)

Slide 24: JEM, 40 pcs

Main board (10-layer PCB): PCB 21,393 €, assembly 7,657 €, components 49,725 €; sub-total 78,775 €
Input module (12-layer PCB): PCB 11,625 €, assembly 6,500 €, components 43,000 €; sub-total 61,152 €
Total: 139,927 € (+ control + readout + SFP)

Slide 25: Energy Sum Algorithm

In all stages, saturate the outputs if an input is saturated or an arithmetic overflow occurs.

- Operate on 40 Mb/s data from the LVDS de-serialisers: 88 channels per JEM, 9-bit ET data, parity, link error
- Latch incoming data on the bunch clock, 2 samples per tick
  - Select the stable sample under VME control
  - Automatic phase detection in firmware (remove that feature?)
  - Delay scan (VME)
- Correct for upstream latency differences, up to 3 ticks (shift register, VME controlled)
- Send data to readout and spy circuitry
- Zero data on parity error; apply channel mask
- Sum the electromagnetic and corresponding hadronic channel into a 10-bit jet element
- Multiplex jet elements to 80 Mb/s and send to the jet processor and the backplane

(A behavioural sketch of this per-channel chain follows below.)
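A minimal behavioural model of the per-channel chain listed above, covering parity zeroing, channel masking and the em+had pre-sum with saturation into a 10-bit jet element. The saturation convention and the function names are illustrative assumptions, not the firmware.

```python
# Behavioural sketch of the input-processor chain for one (em, had) channel
# pair, as described on this slide: zero on parity error, apply channel mask,
# sum em + had into a 10-bit jet element with saturation. The saturation code
# (all ones) and the function names are illustrative assumptions.
JET_ELEMENT_MAX = 0x3FF  # 10-bit full scale

def input_channel(et_9bit: int, parity_error: bool, masked: bool) -> int:
    """One 9-bit ET input after the LVDS de-serialiser."""
    if parity_error or masked:
        return 0
    return et_9bit & 0x1FF

def presum(em: int, had: int) -> int:
    """em/had pre-sum into a 10-bit jet element. Saturate the output if an
    input is saturated (9-bit full scale) or the sum overflows 10 bits."""
    if em == 0x1FF or had == 0x1FF:
        return JET_ELEMENT_MAX
    return min(em + had, JET_ELEMENT_MAX)

# Example: a healthy channel pair and one with a parity error on the em input
je_ok  = presum(input_channel(200, False, False), input_channel(150, False, False))
je_bad = presum(input_channel(200, True,  False), input_channel(150, False, False))
print(je_ok, je_bad)   # -> 350 150
```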

Slide 26: Energy Sum Algorithm (2)

- Threshold jet elements and sum to ET (12 bits, 1 GeV resolution)
- Threshold jet elements and multiply by (cosφ, sinφ), 0.25 GeV resolution
- Sum to 2 × 14-bit (EX, EY) missing-energy vector
- Transmit (EX, EY, ET) to the sum processor
- Calculate the board-level total vector sum
- Quad-linear encoding to 8 bits each: 6-bit value and 2-bit range indicator; resolution 1, 4, 16, 64 GeV, full scale 4 TeV
- Send 25 bits of data incl. odd parity bit D(24) to the backplane

(A sketch of the (EX, EY) sum is given below.)
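The (EX, EY) accumulation can be illustrated as below. The jet-element threshold, the φ values and the fixed-point treatment of cosφ/sinφ are assumptions for illustration; only the 0.25 GeV resolution and the 14-bit width come from the slides.

```python
# Illustrative model of the (EX, EY) missing-energy-vector sum described above:
# each thresholded jet element's ET is multiplied by (cos phi, sin phi) at
# 0.25 GeV resolution and accumulated into 14-bit sums. The phi-bin centres,
# the threshold value and the fixed-point handling of cos/sin are assumptions.
import math

THRESHOLD_GEV = 1      # assumed jet-element threshold
LSB = 0.25             # GeV, resolution of EX/EY

def ex_ey_sum(jet_elements):
    """jet_elements: list of (et_gev, phi_rad). Returns (EX, EY) in 0.25 GeV counts."""
    ex = ey = 0
    for et, phi in jet_elements:
        if et < THRESHOLD_GEV:
            continue
        ex += int(round(et * math.cos(phi) / LSB))
        ey += int(round(et * math.sin(phi) / LSB))

    def clip(v):
        # clip to the 14-bit two's-complement range used on the backplane
        return max(-(1 << 13), min((1 << 13) - 1, v))

    return clip(ex), clip(ey)

print(ex_ey_sum([(50, 0.0), (50, math.pi / 2)]))  # -> (200, 200) counts = (50, 50) GeV
```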

Slide 27: FPGA resources used

Fully synchronous designs; I/O flip-flops used on all data lines.

Input FPGAs, XC2V1500-4FF896C:
- Slice flip-flops: 27%, LUTs: 59%, total IOBs: 90%, block RAMs: 68%, multipliers: 50%, GCLKs: 12%, DCMs: 12%; 40.6 MHz

Sum FPGA, XC2V2000-4BF957C:
- Slice flip-flops: 7%, LUTs: 11%, total IOBs: 83%, block RAMs: 12%, GCLKs: 25%, DCMs: 12%; 42.8 MHz

Slide 28: Performance

All interfaces and the algorithms have been tested on JEM1.0 in Mainz, at the RAL slice test and in the CERN test beam. Problems revealed:

- SystemACE configuration fails if the incoming clock or TCK signal is of insufficient quality: signal distortions confirmed → re-layout of the crystal clock and TCK distribution on JEM1.1
- At CERN, 2 out of 4 PPR channels could not be received error-free: signal distortions confirmed → modifications required on the PPR LCD module
- Errors observed on RoI readout only recently: problems with the on-JEM crystal clock distribution confirmed → re-layout of the readout module, use a local clock

Apart from the above problems, all interfaces and the algorithms have been shown to work error-free in all tests.

Slide 29: Test setup

- Up to 3 JEMs in a 9U crate, allowing for FIO tests in either direction, along with VMM, TCM, CMM (and CPMs!)
- Control: Concurrent CPU on the VMM or via flat cable
- External data sources
  - TTC: TTCvx, TTCvi, TTCex (CERN/RAL) via TCM
  - LVDS: 1 DSS, 16-channel (MZ); several DSS (RAL); LSM (RAL); PPR (CERN, 4 channels)
- External data sinks
  - Merger signals: 2 CMMs (RAL)
  - Readout path: complete ROS (RAL); G-link tester with firmware pattern comparison (MZ)

Slide 30: Test strategies

- Test the full system including all interfaces and algorithms at moderate statistics, generally with physics-like test vectors
  - Requires operation of a ROS and data comparison on a computer; therefore, even in relatively long test runs, very low bit error rates would go undetected
- Test interfaces with firmware-based test adapters and on-JEM diagnostic firmware, allowing for real-time detection of pattern errors
  - These tests will reveal even low-level errors quickly
- Choice of test patterns - look at the possible failure mechanisms:
  - FIO data and merger data on backplane source-terminated lines at moderate speed: no signal dispersion expected nor observed
  - 800 Mb/s readout data: due to optical transmission, no dispersion expected nor observed
  - LVDS links: the pre-compensation circuitry is required to compensate at a single time constant only, well below a single bit period; at the receiving end a slight overshoot should be observed
  - → no inter-symbol interference expected on any of the transmission lines; main source of errors is system noise, so any non-constant data pattern should do
- Use a binary counter pattern. Useful on serial links:
  - Has long stretches of many ones / many zeroes
  - Has transitions from all-ones to all-zeros
  - Easy to detect errors

(A sketch of such a counter-pattern check follows below.)
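A firmware-style counter-pattern check can be modelled in a few lines; the 10-bit word width and the error-counting scheme are illustrative assumptions.

```python
# Illustrative model of a binary-counter test pattern and its checker, as used
# for the firmware-based link tests described above. The 10-bit word width and
# the error-counting scheme are assumptions for illustration.
WIDTH = 10
MASK = (1 << WIDTH) - 1

def counter_pattern(n_words, start=0):
    """Generate the expected binary-counter sequence (wraps at full scale)."""
    return [(start + i) & MASK for i in range(n_words)]

def count_bit_errors(received, expected):
    """Compare received words against the expected counter sequence."""
    return sum(bin((r ^ e) & MASK).count("1") for r, e in zip(received, expected))

expected = counter_pattern(1024)
received = list(expected)
received[100] ^= 0b00010       # inject a single-bit error
print(count_bit_errors(received, expected))   # -> 1
```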

Slide 31: System test at RAL (slice test)

- Setup with 2-stage merging in a single crate: DSS → JEM → crate CMM → system CMM, read out via ROD → ROS
- Comparing readout data against simulation
- ROD type: 6U modules; data format: old format (6U-module specific)
- Results (June 2004): data taken with up to 5 slices of JEM DAQ data; trigger rate up to 60 kHz, 4 × 10^6 events analysed, no errors observed on the JEM readout

Slide 32: Interface tests

At RAL:
- Playback from JEM (ramps) into CMM (parity detection); merger signals crossing 2/3 of the backplane length: no error in bits

In Mainz:
- FIO tests, 3 JEMs (ramps, pattern comparison on the central JEM): no error in bits
- LVDS input tests; source: DSS, 16 inputs exercised at a time, pattern comparison (ramp) in the input module: no error in bits
- Readout link tests: G-link tester with pattern comparison (ramp): no error in bits (problems with the crystal clock from the jet processor)

Slide 33: FIO tests, delay scan

- All data latched into the jet processor on a common clock edge
- Sweep the TTCrx delay setting in 104 ps steps
- Measure data errors on each channel: 10 bits, 5 signal lines
- Single channel: 8 ns error-free window; all channels: 6.5 ns error-free window

(A sketch of such a delay scan follows below.)
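The scan procedure can be sketched as follows; set_ttcrx_delay() and count_errors() are hypothetical stand-ins for the real VME/firmware accesses, and the 25 ns scan range is an assumption.

```python
# Illustrative sketch of the FIO delay scan: step the sampling-clock phase in
# 104 ps increments, count pattern errors at each setting, and report the
# widest error-free window. set_ttcrx_delay() and count_errors() stand in for
# the real VME/firmware accesses and are hypothetical names.
STEP_PS = 104
RANGE_PS = 25000   # assume one bunch-clock period is scanned

def delay_scan(set_ttcrx_delay, count_errors):
    """Return (start_ps, width_ps) of the longest error-free window."""
    samples = []
    for delay in range(0, RANGE_PS, STEP_PS):
        set_ttcrx_delay(delay)
        samples.append((delay, count_errors() == 0))

    best_start, best_len, start, length = 0, 0, None, 0
    for delay, ok in samples + [(RANGE_PS, False)]:   # sentinel closes the last run
        if ok:
            start = delay if start is None else start
            length += STEP_PS
        else:
            if length > best_len:
                best_start, best_len = start, length
            start, length = None, 0
    return best_start, best_len
```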

Slide 34: Latency

- Energy path: 183 ns
- Jet path: 234 ns
- Both within the budget of 9.5 BC (9.5 × 25 ns = 237.5 ns)

Slide 35: CERN test beam

- Within a wider test setup, the following modules were available to generate / analyse JEM 'test vectors' based on true calorimeter signals: PPR → JEM → CMM → CTP, with readout via ROD → ROS
- ROD type: 6U modules
- Data received from the PPR error-free on 2 channels
- Readout from the PPR not possible → could not verify input signal integrity except via the parity error check
- Energy sum signal processing verified internally

Slide 36: Test beam results

- Sum algorithm error-free (effects of the quad-linear encoding visible)

[Plots: input data Eem + Ehad and the energy sum sent to the CMM.]

Slide 37: Production tests

- Boundary scan at the manufacturer: high coverage due to the large fraction of scannable components → verify connectivity (static test)
- Standalone tester for input-module LVDS inputs, pattern comparison in firmware (high statistics)
- Standalone tester for the readout module, pattern comparison in firmware (high statistics)
- DCI operation verified with an oscilloscope (drive an unterminated 50 Ω cable into the scope, record the pulse shape) → dynamic test
- System-level tests in Mainz: 1 crate, 1 JEM supplied with LVDS data at a time; playback and spy facilities used to generate / capture data at board boundaries
  - FIO delay scan
  - High-statistics FIO BER tests, pattern detection in firmware, testing a full crate at a time with maximum activity on LVDS, VME and readout
- System-level tests at CERN