
J. Linnemann, MSU
Agenda
- L2 monitoring and readout: Jim Linnemann, 20 min
- Magic Bus Transceiver: Dan Edmunds, 20 min
- Plans for the GL2 Card: Drew Baden, 20 min
- Using GL2 in the Muon preprocessor: Mike Fortner, 20 min
- Discussion: draft specs for the GL2 card

L2 I/O and Monitoring
James T. Linnemann
Michigan State University
NIU Triggering Workshop, October 18, 1997

Standard Preprocessor I/O
- Inputs to processor(s) every (L1) event
  - possibly interleaved with event processing
- Any inter-processor handshakes or data movement needed during event processing
- Output to L2 Global every event
  - any data gathering needed (via VME?)
- Output to L3 every event passed by Global
  - any data gathering needed (maybe just by VBD)
  - VME (or P3 VME) between processor and VBD
  - VBD seizes VME during L3 readout

Auxiliary I/O
- Download of executable, constants
- Once per run: e.g. thresholds of objects to send to L2 Global
- Once per event: SCL inputs (queued)
  - Needed before processing:
    - standard qualifiers: Needed, Monitoring, Mark & Pass
    - special qualifiers: L1 bit-dependent hints (for speedup)
  - Needed after processing:
    - L2 Pass/Fail from L2 HW FW

Monitoring I/O: Continuous and Periodic
- Monitor every buffer in the system by both methods
- Pacman displays can come from either path
- Continuous (132 ns): L2FW scaler gates for monitoring
  - internal state, number of buffers occupied
  - gives percentages and average time in state; histogram of buffer occupancy
- Every 5 sec: monitoring (and error messages?)
  - sent to a computer with D0IP, to be served
  - event-number stamp to match event counts: In, Out, Processed, Error
    - collected after processing an event with the Monitoring qualifier
  - buffer occupancy (right now)
  - circular buffer of processing times (for each state?)
    - distribution of times, to see the tails

More Exotic I/O Possibilities
- Debugging: Halt/Examine/Step/Deposit
  - via the download path, or other means?
- Resetting a crate (= re-download?)
- Incident logging (other than SCL)?
- Event dump
- Test data injection
  - helps standalone testing
  - more than 16 events? Required? Full speed?
- Playback

Summary of L2 I/O
- Hardware scaler lines (every buffer's occupancy; state ID)
- Every event:
  - L1 input
  - test data injection
  - L2 Global output
  - SCL: L1, L2 FW
- L3 output: all passed events, or just M+P?
- Executable (and constants)
- Cuts for the run
- Periodic monitoring block to a server, not via L3
- Event dump, debugger, playback, incident log
- Reset

I/O Paths
- Dedicated point-to-point lines
- VME
  - direct card to card
  - multiport to another crate
  - crate interconnect
- Ethernet/IP
- Private bus in crate (Mbus, PCI, P3, ...)
- Shea/Goodwin?
- [SCL and Data Cable]

VME I/O Issues
- How many functions (safely) via VME?
- Can I/O during processing be done without causing deadtime?
  - f_T = M (f_B)^2 / N, where f_B = rate x size / bus speed
  - f_T = fraction of the time budget; f_B = fraction of bus capacity
  - M = I/O operations per event; N = number of (interruptible) fragments (VBD = 1)
  - contributes to the fractional variability of processing time (f_B max)
  - quadratic dependence, so be sure you are in control!
- Drop L3 output except for monitoring events (separate L1 bit) to help this?
  - L2G writing its inputs, to simplify engineering for the preprocessors?
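The budget estimate above can be sketched numerically. This is an illustrative calculation only; the rates, transfer sizes, and bus speed below are made-up numbers, not measured DØ parameters.

```python
# Sketch of the VME time-budget estimate from the slide:
#   f_B = rate * size / bus_speed   (fraction of bus capacity used)
#   f_T = M * f_B**2 / N            (fraction of the time budget consumed)

def bus_fraction(event_rate_hz: float, bytes_per_event: float,
                 bus_bytes_per_sec: float) -> float:
    """Fraction of bus capacity consumed by this traffic."""
    return event_rate_hz * bytes_per_event / bus_bytes_per_sec

def time_budget_fraction(m_io_per_event: int, f_b: float,
                         n_fragments: int) -> float:
    """f_T = M * f_B^2 / N; note the quadratic dependence on f_B."""
    return m_io_per_event * f_b ** 2 / n_fragments

# Hypothetical numbers: 10 kHz L1 rate, 1 kB transfers, a 20 MB/s bus,
# one I/O per event, one uninterruptible fragment (VBD = 1).
f_b = bus_fraction(10_000, 1_000, 20e6)   # 0.5
f_t = time_budget_fraction(1, f_b, 1)     # 0.25
```

The quadratic term is the point: halving the bus load cuts the time-budget cost by a factor of four.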

L2 I/O Card Issues
- MBT card: miscellaneous functions for L2 Global and L2 Cal; Mbus, plus possibly VME
- GL2 card: miscellaneous functions for L2 muon and L2 tracking; VME
- How many functions get pushed onto the MBT/GL2?
  - the MBT is an Mbus (and possibly VME) slave
- How much commonality between the MBT and GL2?
- Can the GL2 be used as an L1 Cal driver?

What Happens at First Beam?
- Deadtime is nonzero
- Look at which source of deadtime dominates
- Look at the occupancy of the L2 decision buffers and L3 readout buffers
- Whoever has full buffers in front of them has to react
- If it's L2, we need details of the internals of the system
  - Global, or the preprocessors? Which preprocessor?
  - I/O, or processing?
  - Average events, or special cases causing problems?

Goals of Monitoring
- Verify all events are accounted for
  - how many data faults/errors
- Document rejection/pass rates
- Understand (for normal operation) the sources of deadtime
  - processing time for each element
  - loading of each of the buffers within the system
  - interaction of trigger setup and performance
  - where to expend tuning effort to improve performance
- Assist in debugging in case of problems

Two Kinds of Monitoring
- Information recorded by software counters
  - current status information
  - recent history
- Hardware scalers
  - exact counts (e.g. events)
  - integral quantities (e.g. luminosity)
  - averages (e.g. livetime)
- BOTH types are captured periodically and sent to the monitoring subsystem in the host

L2 HW FW Support
- Counter with the current number of events in L2 (= number in the FE L2 decision buffers)
  - decode into 0-15 and scale each: a histogram of the number of buffers in use
- Similar scaling/decoding support is available for other quantities from L2
  - number of events in a given buffer
  - current state code of a processor: decode into the fraction of time in each state
  - un-encoded objects are also welcome; the scalers are flexible
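The decode-and-scale idea above can be sketched in software. This is an illustrative model, not the firmware: each sample of the 4-bit occupancy counter increments one of 16 scalers, so over time the scalers accumulate a histogram of buffer occupancy.

```python
# Model of the decode-into-0-15-and-scale-each scheme: one scaler per
# possible occupancy value; sampling the counter bumps the matching
# scaler, so the scaler bank is itself the occupancy histogram.

N_VALUES = 16
scalers = [0] * N_VALUES  # one scaler per possible occupancy (0-15)

def sample_occupancy(occupancy: int) -> None:
    """Decode the 4-bit counter value and bump the matching scaler."""
    scalers[occupancy & 0xF] += 1

# Feed in a stream of sampled occupancies (made-up values).
for occ in [0, 1, 1, 2, 1, 0, 3, 1]:
    sample_occupancy(occ)

# scalers[1] == 4: four samples saw exactly one event buffered.
```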

L2 Global: Snapshots
- Event counts, pass/fail by filter bit
- Current number of events in
  - the L2G input buffer
  - the L2G output buffer
  - the input receiver buffer (for each input line)
- Circular buffers of start/stop times for each state
  - allows distributions, e.g. of processing time per event
- L2 Cal captures similar information
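A circular buffer of start/stop times can be sketched as below. This is a minimal illustration, not the actual L2 Global code; the depth and the recorded times are made up. The point is that a fixed-size ring keeps the most recent entries per state, so the monitor can form time distributions (including the tails) with bounded memory.

```python
# Sketch of a per-state circular buffer of (start, stop) times.

from collections import deque

class StateTimes:
    def __init__(self, depth: int = 64):
        # A deque with maxlen silently discards the oldest entry:
        # a simple ring buffer.
        self.ring = deque(maxlen=depth)

    def record(self, t_start: float, t_stop: float) -> None:
        self.ring.append((t_start, t_stop))

    def durations(self) -> list:
        """Processing times for the events still in the ring."""
        return [stop - start for start, stop in self.ring]

proc = StateTimes(depth=4)
for i, dt in enumerate([1.0, 1.2, 0.9, 5.0, 1.1]):  # one long tail event
    proc.record(t_start=float(i), t_stop=float(i) + dt)

# Only the latest 4 entries survive; the 5.0 s tail remains visible.
```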

L2 Global: Scalers
- Current state code: updated on state change
  - Processing, Idle, L3 Readout, etc.
- Number of events in buffer: updated on event arrival, or at the end of event processing
  - L2G input, L2G output, L2G receiver (each input), worst of the L2G receivers (each MBT card)
- L2 Cal captures similar information

What L2 Scalers Can Do
- Deadtime fraction
- Time-in-state scalers can drive pac-man displays
  - average processing time = fraction of time in "Processing" x elapsed time / N_events
- Buffer-occupancy scalers can drive pac-man or bar-chart displays, or calculations
  - low buffer occupancy in front of an object means the bottleneck is elsewhere, or upstream of the buffer
  - find the mean occupancy, or the fraction of time more than N buffers are full
  - color code on a map of the system
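The derived quantities above can be sketched as small reductions over raw scaler values. The function names and the sample numbers are illustrative; the inputs are assumed to be per-occupancy sample counts (as accumulated by the decode-and-scale scheme) and a time-in-state fraction.

```python
# Sketch of turning raw scaler values into display quantities.

def mean_occupancy(occ_scalers: list) -> float:
    """Mean buffer occupancy from per-occupancy sample counts."""
    total = sum(occ_scalers)
    return sum(k * n for k, n in enumerate(occ_scalers)) / total

def fraction_above(occ_scalers: list, n: int) -> float:
    """Fraction of samples in which more than n buffers were full."""
    total = sum(occ_scalers)
    return sum(occ_scalers[n + 1:]) / total

def avg_processing_time(frac_in_processing: float,
                        elapsed_sec: float, n_events: int) -> float:
    """<t_proc> = fraction of time in 'Processing' x elapsed / N_events."""
    return frac_in_processing * elapsed_sec / n_events

occ = [10, 20, 5, 5]        # samples with 0, 1, 2, 3 buffers full
m = mean_occupancy(occ)     # (0*10 + 1*20 + 2*5 + 3*5) / 40 = 1.125
f = fraction_above(occ, 1)  # (5 + 5) / 40 = 0.25
```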

L1 HW FW: Monitoring Qualifier
- Every 5 sec or so, the L1 HW FW sets a qualifier on an L1-passed event
  - capture the L1 HW FW scalers after this event
  - L2 preprocessors capture scalers after this event
    - they should do this even if the event is not fully processed
  - L2 Global captures scalers after this event
  - capture the L2 HW FW scalers after this event
  - Result: an event-synchronous set of snapshots
    - event counts should sum perfectly
    - buffer counts are not simultaneous (a few 100 microseconds apart)
- Each passes its information to the host (via TCC???)
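The event-accounting check that event-synchronous snapshots enable can be sketched as follows. The field names are illustrative, not the real snapshot format: each stage reports counts stamped with the same event number, so within a stage the counts must balance, and adjacent stages must agree.

```python
# Sketch of cross-checking event-synchronous monitoring snapshots.

from dataclasses import dataclass

@dataclass
class Snapshot:
    event_number: int  # Monitoring-qualifier event that triggered capture
    n_in: int          # events received by this stage
    n_processed: int   # events fully processed
    n_error: int       # events dropped or flagged bad

def accounts_balance(s: Snapshot) -> bool:
    """Within a stage, every input event must be processed or errored."""
    return s.n_in == s.n_processed + s.n_error

def stages_consistent(a: Snapshot, b: Snapshot) -> bool:
    """Output of one stage should match the input of the next,
    and both snapshots must refer to the same qualifier event."""
    return a.event_number == b.event_number and a.n_processed == b.n_in
```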

What L2 Preprocessors Should Do
- Capture a snapshot for every "Monitoring" event
  - stamp it with the event number and report to host monitoring
    - (and let that system combine the pieces?)
  - or report to TCC (on request? via VME?)
- Snapshot:
  - N_in, N_bad, N_processed
  - N_occupied for each buffer type: input, (internal,) L3
  - circular buffer of processing times, if the time varies
- Scalers:
  - N_occupied for each buffer
  - current state (or at least "processing/waiting")

[Diagram: L2 Global (3.1) with Mu, em, and jet inputs]

[Diagram: L2 Global Administrator and Worker states, including "In L3"]