
12-Sep-2005 C.Youngman / GTT group (slide 1)

An introduction to the GTT

Topics covered:
- Why a GTT?
- Adding the GTT to the ZEUS trigger.
- Interfacing data sources.
- GTT hardware and software.
- Source data sizes and latencies.
- Barrel algorithm, how it works and results.
- DQM, simulation, implementing new versions.
- Constants database.
- Acknowledgements.

For more details check the GTT component web page.

12-Sep-2005 C.Youngman / GTT group (slide 2)

Why a GTT?

Conceptual development path (1999):
- How can the MVD be included in the trigger?
  - MVD participation in the GFLT is not feasible: the readout latency is too large.
  - Participation at the GSLT is possible, but MVD track and vertex information is poor due to the low number of planes.
- Expand the scope to include data from other tracking detectors - a Global Track Trigger:
  - initially with the CTD (overlap with the barrel detectors),
  - later with the STT (overlap with the wheel detectors).
- Trigger aims: initially implement an improved CTD-only trigger (z-vertex, tracks, Pt, invariant masses, etc.), then add MVD hits to further improve the trigger quantities, and eventually extend the trigger to the forward region.

[Diagram: HERA beams, e± 27.5 GeV on p 920 GeV.]

12-Sep-2005 C.Youngman / GTT group (slide 3)

The ZEUS trigger

The ZEUS trigger, designed in 1990, was the first high-rate pipelined system and has three trigger levels:
- GFLT (first level): custom electronics, ECL connections, all components interfaced.
- GSLT (second level): INMOS Transputer (TP) 25 MHz, 2.5 MB/s serial connections, a subset of components connected.
- EVB: PC based, serial and network connections, distributes the GSLT decision, all components connected.
- TLT (third level): PC farm running ~offline filter software, network connections.

[Diagram: ZEUS trigger chain with CAL front end, CALFLT/CALSLT, GFLT/GSLT accept-reject and FCLR abort paths, other components, EVB and offline tape. Accept rates: 10^7 Hz bunch crossings, 500 Hz GFLT, 60 Hz GSLT, 15 Hz TLT, 300 Hz FCLR. Latency or time available: 4.4 us (5 us pipeline), ~10 ms, ~1 s. Event buffers: pipeline, 8-64, ~1000, ~100. Operation ranges from synchronous clock-driven at the front end to asynchronous, not event ordered, downstream.]

12-Sep-2005 C.Youngman / GTT group (slide 4)

GTT driven trigger deadtime

Synchronization:
- GFLT control and data signals sent to, and returned by, the components' FLT sub-systems are locked to the HERA clock.

Deadtime:
- Occurs when the GFLT is disabled from issuing new accept triggers.
- Components disable the GFLT trigger when a buffer-full condition is about to occur; they re-enable it when sufficient buffer space is available.

The GTT latency L can cause deadtime, with the budget set by the lowest component event buffer count divided by the FCLR accept rate. If L > 7 / 300 Hz ≈ 23 ms the GTT will contribute to deadtime. This calculation is not exact, so the smallest possible latency must be targeted.
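The buffer-count argument above can be sketched in a few lines (an illustration only; the function name and unit conventions are mine, the 7 buffers and 300 Hz are the slide's numbers):

```python
# Deadtime budget sketch: a component can absorb at most `event_buffers`
# accepts before it must throttle the GFLT, so the trigger latency must
# stay below (buffer count) / (accept rate).

def deadtime_threshold_ms(event_buffers, accept_rate_hz):
    """Maximum latency (ms) before a component can cause deadtime."""
    return 1000.0 * event_buffers / accept_rate_hz

# Slide numbers: smallest buffer count 7, FCLR accept rate 300 Hz.
threshold = deadtime_threshold_ms(7, 300)   # ~23.3 ms
```

Any GTT latency safely below this threshold leaves headroom for rate fluctuations, which is why the slide insists on targeting the smallest possible latency rather than sitting near 23 ms.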

12-Sep-2005 C.Youngman / GTT group (slide 5)

Fitting the GTT into the trigger

What the GTT has to do:
1. Components push data to the GTT on GFLT accept if there is no FCLR abort.
2. The GTT computes its decision.
3. The GTT sends the decision to the GSLT.
4. The GTT receives the GSLT trigger decision.
5. On GSLT accept, the GTT sends its banks to the EVB.

Time requirements:
- The GTT decision latency at the GSLT must not be significantly worse than the CTD-SLT latency envelope.

[Plot, Run 54314: CTD latency at the GSLT; mean = 13.6 ms, tail < 44 ms.]

12-Sep-2005 C.Youngman / GTT group (slide 6)

Computing and communication trends

[Timeline plot (source: S.Cittolin), marking: ZEUS operational, CTD-SLT finalized, GTT startup.]

1992 CTD-SLT implementation:
- CPU = 16 x 25 MHz TP, network = 400 MHz
- 10 kB data latency = 10 x 2.5 MB/s = 2.5 ms

2000 GTT implementation:
- farm of dual 1 GHz CPUs
- 10 kB data latency = 10 x 10 MB/s = 0.25 ms
- 1-2 switches

2005 GTT implementation:
- farm of dual 4 GHz CPUs
- 10 kB data latency = 10 x 100 MB/s = 0.03 ms
- many switches

We have the 2000 implementation.

12-Sep-2005 C.Youngman / GTT group (slide 7)

Component interfaces

CTD (and STT):
- Splitter TPs are inserted into the component TP network to duplicate the data stream.
- Data are sent to Nikhef 2TP VME modules in a single VME crate.
- A PPC VME CPU reads the data via the TPM.
- The CPU sends the complete event data to the GTT.

MVD:
- VME ADCs buffer the data (3 crates).
- On GFLT accept a PPC VME CPU reads the data.
- The CPU sends the crate data to the GTT.

For the CTD and STT the Nikhef 2TP module marks the boundary between component and GTT; the components have to boot their side's executable.

[Diagram: CTD/STT local FLT and SLT, digitized data buffer, data pipelines, cluster and strip FIFOs, TP splitter, interfaces, result and data buffers, other components; GFLT accept and GSLT decision paths to the GSLT and EVB.]

12-Sep-2005 C.Youngman / GTT group (slide 8)

GTT hardware
- MVD readout: 3 Motorola MVME MHz
- CTD/STT interfaces: NIKHEF-2TP VME-Transputer, Motorola MVME MHz
- PC farm: 12 Dell PowerEdge 4400, dual 1 GHz
- GTT/GSLT result interface: Motorola MVME MHz
- GSLT/EVB trigger result interface: Dell PowerEdge 4400 dual 1 GHz, Dell PowerEdge 6450 quad 700 MHz
- Network switches: 2 Intel Express 480T Fast/Gigabit, 16 ports

Thanks to Intel Corp., who provided the switch and PowerEdge hardware via a Yale grant.

[Photos: CTD/STT interface, MVD readout, PC farm and switches, GTT to GSLT interface, EVB and GSLT decision.]

12-Sep-2005 C.Youngman / GTT group (slide 9)

GTT software

The GTT is the MVD SLT sub-system; there is no GTT on the RCO, just the MVD!

The GTT algorithm process running on each host contains the following threads:
- MVD0 data source: reads MVD upper-barrel cluster event data
- MVD1 data source: reads MVD lower-barrel cluster event data
- MVD2 data source: reads MVD wheel cluster event data
- CTD data source: reads CTD axial and stereo event data
- CTDZ data source: reads CTD z-by-time event data
- STT data source: reads STT event data
- Barrel algorithm: uses MVD0, MVD1, CTD and CTDZ data for track finding
- Forward algorithm: uses MVD2 and STT data for track finding
- Timeout thread: forces a timeout result to be sent to the GSLT if 30-40 ms is exceeded
- Main thread
- Shutdown thread: receives the shutdown signal
- GSLT thread: receives the GSLT trigger result, sending banks to the EVB on accept

A complete description of the GTT process software can be found on the GTT web page. Note that the STT and forward algorithms ran through run 49858; they are not currently enabled and no results associated with them will be shown.
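The timeout thread's role can be sketched as follows (a minimal illustration; the wrapper name and result format are hypothetical, not the real GTT code, and only the 30-40 ms deadline comes from the slide):

```python
import threading
import time

RESULT_DEADLINE_S = 0.035  # slide quotes 30-40 ms

def run_with_timeout(algorithm, deadline=RESULT_DEADLINE_S):
    """Run `algorithm` in a worker thread; if it misses the deadline,
    hand a forced timeout result to the GSLT instead."""
    result = {}
    done = threading.Event()

    def worker():
        result["value"] = algorithm()
        done.set()

    threading.Thread(target=worker, daemon=True).start()
    if done.wait(timeout=deadline):
        return result["value"]       # normal algorithm result
    return {"timeout": True}         # forced timeout result for the GSLT

fast = run_with_timeout(lambda: "tracks")         # finishes well in time
slow = run_with_timeout(lambda: time.sleep(0.2))  # exceeds the deadline
```

The key property mirrored here is that the GSLT always receives *some* result within the latency envelope, even when track finding on a busy event runs long.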

12-Sep-2005 C.Youngman / GTT group (slide 10)

Source event data sizes

Mean data size:
- MVD0 = 1.3 kB
- MVD1 = 1.4 kB
- MVD2 = test run (usually < MVD0)
- CTD = 4.1 kB
- CTDZ = 1.1 kB

Event-size cutoffs are used to control latency:
- CTD = CTDZ = 10 kB
- MVDx = 8 kB (MVD used in the GTT)
- MVDx = 2 kB (MVD not used in the GTT)

If the cutoff is exceeded no data is sent from the source, just a synchronization header.

[Plots: mean data sizes vs run number.]
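The cutoff rule can be sketched like this (cutoff values are from the slide; the packet dictionary and source labels are hypothetical stand-ins for the real wire format):

```python
# Size-cutoff sketch: an oversize event is replaced by a bare
# synchronization header so event ordering is preserved without
# paying the transfer latency of a huge payload.

CUTOFF_KB = {"CTD": 10.0, "CTDZ": 10.0, "MVD_used": 8.0, "MVD_unused": 2.0}

def build_packet(source, event_bytes, payload):
    if event_bytes / 1024.0 > CUTOFF_KB[source]:
        return {"source": source, "sync_header_only": True}
    return {"source": source, "sync_header_only": False, "payload": payload}

small = build_packet("CTD", 4100, b"...")       # ~4.1 kB mean CTD event: sent
large = build_packet("MVD_used", 9000, b"...")  # above the 8 kB cutoff: header only
```

Sending the header even for cut events is what keeps the event streams synchronized across sources.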

12-Sep-2005 C.Youngman / GTT group (slide 11)

Source event data sizes (continued)

[Table: percentage of events cut in MVD0, MVD1 and MVD0&MVD1 as a function of the cluster cutoff (kB).]

MVD cutoff issues:
- The percentage of GSLT passthru events cut by the MVD0+1 cutoff is given in the table.
- The percentage of dijet events cut is similar (see plots).
- Noise in the MVD and background in HERA have to be controlled.

CTD cutoff issues:
- The cutoff is larger than the maximum size seen, i.e. no events are cut.
- This is ideal for the GTT, as there are no acceptance problems; smaller size = smaller latency.

[Plots: GSLT passthru sample and dijet sample.]

12-Sep-2005 C.Youngman / GTT group (slide 12)

Source data latencies

Mean data latency:
- MVD0 = 0.7 ms
- MVD1 = 0.7 ms
- MVD2 = test run (usually < MVD0)
- CTD = 7.0 ms
- CTDZ = 4.4 ms

CTD and CTDZ data are delayed by transfer times in the TP networks. The arrival of source data with different delays reduces network contention. CTD and CTDZ latency will drive the GTT latency.

[Plots vs run number: latency (delay) of data at the GTT after GFLT accept.]

12-Sep-2005 C.Youngman / GTT group (slide 13)

Network transfer speeds

[Plots vs run number: network transfer speed from interface to GTT.]

Kinks in the network transfer speeds are seen at 1 kB boundaries: it is faster to send slightly more than N kB.

The source transfer speed is size dependent:
- MVD: < 1 kB = 3 MB/s, > 1 kB = 10 MB/s (i.e. FastEthernet)
- CTD: < 1 kB = 5 MB/s, > 1 kB = 10 MB/s (i.e. FastEthernet)

This is not fully understood, but is not a problem.

12-Sep-2005 C.Youngman / GTT group (slide 14)

The barrel algorithm

Tracking in the CTD is based on the CTD-SLT algorithm, but uses all the data, including the stereo data not available to the CTD-SLT - so it cannot be worse.

Track finding strategy:
- Pass 1 (improved CTD-SLT result):
  - axial segment finding
  - 2D axial track finding
  - 3D track finding with z-by-time space points
  - improve the 3D track finding with stereo segments
  - calculate the z-vertex
- Pass 2 (add MVD information to pass 1 tracks):
  - refit the stereo segments using the calculated z-vertex
  - recalculate the vertex
  - refit the tracks including MVD information
  - recalculate the vertex

Pass 2 is currently being implemented. Pass 1 and 2 results will be available to the GSLT. The following results are a mixture of both!
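As a structural illustration only, the two-pass flow might be sketched as a toy like this, where trivial stand-ins replace the real segment finding and fitting (tracks are reduced to bare z0 values; nothing here is the actual GTT code):

```python
# Toy two-pass sketch: pass 1 produces tracks and a first vertex from
# CTD data; pass 2 refits with that vertex, folds in MVD hits, and
# recomputes the vertex, mirroring the step order on the slide.

def z_vertex(tracks):
    return sum(tracks) / len(tracks) if tracks else None

def pass1(ctd_tracks):
    # stand-in for segment finding + 2D/3D track finding
    return ctd_tracks, z_vertex(ctd_tracks)

def pass2(tracks, zvtx, mvd_hits):
    refitted = [0.5 * (t + zvtx) for t in tracks]  # stand-in "refit to vertex"
    combined = refitted + mvd_hits                 # fold in MVD information
    return combined, z_vertex(combined)

tracks, v1 = pass1([-1.0, 0.0, 1.0])
tracks, v2 = pass2(tracks, v1, [0.0])
```

The point of the structure is that pass 2 is strictly additive: it starts from the pass 1 result, so the GSLT can always fall back on the pass 1 quantities.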

12-Sep-2005 C.Youngman / GTT group (slide 15)

Barrel reconstruction: a clean event

Tracks:
- yellow = GTT found track
- red = VCTRAK CTD vertex track
- blue = VCTRAK CTD non-vertex track

Good agreement.

12-Sep-2005 C.Youngman / GTT group (slide 16)

Barrel reconstruction: a not so clean event

Tracks:
- yellow = GTT found track
- red = VCTRAK CTD vertex track
- blue = VCTRAK CTD non-vertex track

Some tracks are missed, but the vertex is found.

12-Sep-2005 C.Youngman / GTT group (slide 17)

Barrel reconstruction: a busy event

Tracks:
- yellow = GTT found track
- red = VCTRAK CTD vertex track
- blue = VCTRAK CTD non-vertex track

Tracks are missed and different ones found, possibly two vertices - but probably OK.

12-Sep-2005 C.Youngman / GTT group (slide 18)

Barrel track finding efficiency

The GTT track finding efficiency, compared to offline VCTRAK tracks, is higher for clean events. Including MVD hits in the algorithm highlighted a number of bugs in the CTD-only tracking; these have been corrected.

[Plots: clean events - pass 2; busy events - pass 2.]

12-Sep-2005 C.Youngman / GTT group (slide 19)

Barrel primary z-vertex

The primary vertex uses the CTD-SLT overlapping-bin method:
- bin the track Z0, weighted by the number of stereo segments
- use the most probable bin as the initial vertex
- take the z-vertex from the weighted mean of tracks within 9 cm of the initial vertex

Dijet sample (CTD only):
- resolution 3.7 cm
- efficiency ~95%; in 2-3% of events a secondary vertex is found.

Adding the MVD is expected to improve the vertex resolution to ~1.5 cm. Improved methods are being finalized. Varying the vertex cut trades purity against background contamination.
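The three steps above can be sketched in a few lines. The 9 cm window is from the slide; the bin width and overlap step are assumptions for illustration, and tracks are reduced to (z0, number of stereo segments) pairs:

```python
# Overlapping-bin z-vertex sketch: seed the vertex from the most
# populated (weighted) bin, then refine with a weighted mean of the
# tracks within 9 cm of the seed.

def primary_z_vertex(tracks, bin_width=8.0, step=4.0, window=9.0):
    """tracks: list of (z0_cm, n_stereo_segments) pairs."""
    if not tracks:
        return None
    zs = [z for z, _ in tracks]
    lo, hi = min(zs), max(zs)
    # overlapping bins: centres every `step` cm, each `bin_width` cm wide
    centres = [lo + i * step for i in range(int((hi - lo) / step) + 1)]

    def bin_weight(c):
        return sum(w for z, w in tracks if abs(z - c) <= bin_width / 2)

    seed = max(centres, key=bin_weight)  # most probable bin -> initial vertex
    # weighted mean of tracks within the window around the seed
    sel = [(z, w) for z, w in tracks if abs(z - seed) <= window]
    wsum = sum(w for _, w in sel)
    return sum(z * w for z, w in sel) / wsum if wsum else None

# An outlier track at z = 60 cm is excluded by the 9 cm window:
vtx = primary_z_vertex([(-2.0, 3), (0.0, 4), (1.5, 3), (60.0, 1)])
```

The seeding step is what makes the method robust against beam-gas and secondary-interaction tracks far from the true vertex, which would pull a plain weighted mean.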

12-Sep-2005 C.Youngman / GTT group (slide 20)

Barrel performance

[Plot: MVD latency at the GSLT; mean = 9.5 ms, a low-side tail exists.]

Barrel algorithm time requirements:
- data decoding and preparation ~0.2 ms
- processing: mean ~1 ms, max < 10 ms

The GTT latency at the GSLT is acceptable:
- the 9.5 ms mean is better than the CTD's 13.6 ms
- the tail is slightly longer
- adding Pass 2 is not expected to change this
- the GTT latency is driven by the data source latency

12-Sep-2005 C.Youngman / GTT group (slide 21)

Barrel results sent to the GSLT

GTSBEV bank:
- 1st row is the Pass 1 result
- 2nd row is the Pass 2 result (if present)
- trigger quantities: primary vertex, number of tracks found, Pt of the 2 highest-Pt tracks, Pt sum of vertex tracks, background word, J/psi mass, D0 mass
- Flag3 contains important control bits: timeout flag, etc.

GTDSEV bank:
- pre-cut event data sizes

Both the GTSBEV and GTDSEV banks are written offline for use in trigger checks. It is likely that more information will be added.

12-Sep-2005 C.Youngman / GTT group (slide 22)

Impact on the GSLT trigger

Of the 72 active trigger slots at the GSLT (recent 05_v72):
- 12 use GTT tracking information (Z vtx, N vtxtrk, N trk, Pt, ...)
- 24 use the GTT-supplied input component data-size cutoffs
- Precise quantification is difficult due to slot duplication and multiple similar slot definitions, plus unclear usage in the final analysis.

Physics group usage:
- HFL: HFG01 (copy of HFL1), HFG02 (copy of HFL3), HFG05 (copy of HFL5, feedback?), HFG07 (copy of HFL7), and HFGB1 (lower-rate HFL1)
- EXO: EXG07 (beam-gas testbed, feedback?)
- MU: MUG05 (MU01 at high lumi)
- QCD: HPG13 through HPG17 (test triggers?)

Conclusions:
- There is currently a lot of activity by the physics groups on adding the GTT.
- As the GTT barrel result is much better than the CTD-SLT, the latter will be disabled after the shutdown.

12-Sep-2005 C.Youngman / GTT group (slide 23)

DQM, simulation, implementing new versions

- Standard GSLT passthru and physics input data sets are available for testing new versions online and offline.
- Libraries are available for ZGANA/czar trigger simulation.
- GTT DQM plots are included in the CTD tcbol web page; these are currently being improved.
- Well-checked new versions can be introduced without errors, and the stability of GTT results can be monitored.

12-Sep-2005 C.Youngman / GTT group (slide 24)

Constants

The effect on the GTT of changing CTD constants (drift velocity, etc.) has been investigated. A web tool is now available which allows the database to be modified by the CTD expert when significant changes are found.

12-Sep-2005 C.Youngman / GTT group (slide 25)

Acknowledgements

Who is currently working on what:
- N.Copola - GSLT trigger
- M.Sutton - DQM, simulation, barrel Pass 1
- V.Roberfroid - CVS, barrel Pass 2
- C.Youngman - GTT software
- B.Straub - trigger implementation, performance measurement
- J.Ferrando - J/psi

Who has worked on what:
- B.Dunne - J/psi
- A.Polini - source interfaces
- M.Bell - vertexing, performance measurements
- P.Alfrey - trigger implementation, performance measurements
- D.Gladkov - forward algorithm
- S.Dhawan - hardware
- R.Hall-Wilton - trigger implementation

12-Sep-2005 C.Youngman / GTT group (slide 26)

Integrating the STT?

Data size and latency issues:
- Tests in 2003 showed that the data size, and hence the latency at the GTT, was often large. This has to be addressed by the STT group.
- The value of a forward trigger at the GSLT also needs (re)evaluating.

Is enough time and effort available?
- It has taken nearly 12 months to get a baseline CTD+MVD algorithm available.

[Table: CTD and STT data size (kB) and latency (ms). Latency: T(first frame at GTT) - T(GFLT accept).]