Hall D Trigger and Data Rates Elliott Wolin Hall D Electronics Review Jefferson Lab 23-Jul-2003

Outline
1. Rates from Design Report
2. Comparison with LHC, CLAS…
3. Additional Considerations
4. DAQ Challenges

1. Rates from Design Report
- High trigger rate – 200 KHz
- Deadtimeless, pipelined front ends
- Small event size – 5 KB
- Small Level 1 rejection – factor of 2
- Modest rate off detector – 1 GB/sec
- Modest Level 3 rejection – factor of 10
- Modest cpu needed in Level 3 – 0.1 SPECint
- High rate to tape – 100 MB/sec
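As a cross-check, the numbers above follow from one another: the off-detector bandwidth is the Level 1 accept rate times the event size, and the tape rate is that divided by the Level 3 rejection factor. A minimal sketch in C using only the design-report values quoted above; reading "0.1 SPECint" as SPECint-seconds per event is an assumption made here to estimate the total Level 3 CPU requirement.

```c
#include <stdio.h>

int main(void)
{
    /* Inputs from the design report (see slide above) */
    double l1_rate_hz     = 200e3;  /* Level 1 trigger rate: 200 KHz                */
    double event_size_b   = 5e3;    /* event size: 5 KB                             */
    double l3_rejection   = 10.0;   /* Level 3 rejection factor                     */
    double l3_cpu_per_evt = 0.1;    /* assumed: 0.1 SPECint-seconds per event in L3 */

    /* Derived rates */
    double off_detector_bps = l1_rate_hz * event_size_b;     /* ~1 GB/s      */
    double l3_out_hz        = l1_rate_hz / l3_rejection;     /* 20 KHz       */
    double to_tape_bps      = l3_out_hz * event_size_b;      /* ~100 MB/s    */
    double l3_farm_specint  = l1_rate_hz * l3_cpu_per_evt;   /* total L3 CPU */

    printf("Off detector  : %.1f GB/s\n", off_detector_bps / 1e9);
    printf("After Level 3 : %.0f KHz, %.0f MB/s to tape\n",
           l3_out_hz / 1e3, to_tape_bps / 1e6);
    printf("Level 3 farm  : ~%.0f SPECint total\n", l3_farm_specint);
    return 0;
}
```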

1. Rates, cont'd

2. Comparison with LHC, CLAS…
Compared to LHC, Hall D has:
- Similar (LHCb, BTeV) or higher trigger rate
- Much smaller events
- Much smaller rate off detector
- Much smaller total trigger rejection
- Similar rate to tape
- Less cpu/evt needed in Level 3

2. Comparison with LHC, CLAS…
Compared to CLAS, Hall D has:
- Much higher trigger rate: 200 KHz vs 3 KHz
- Same size events
- Approximately the same number of channels
- Much higher rate off detector: 1 GB/s vs 25 MB/s
- Factor 10 Level 3 rejection (CLAS has no Level 3)
- Factor 4 higher rate to tape: 100 MB/s vs 25 MB/s

(Chart: Hall D compared with KTeV and CLAS)

(Chart: Hall D compared with BTeV, CMS, ATLAS, KTeV, CDF, D0, BaBar, and CLAS)

3. Additional Considerations
- Cannot interrupt the ROC for every event (200 KHz)
- Event blocking in front end CPUs (sketched below)
- Timing and trigger distribution
- Note that CLAS has:
  - 25 crates
  - 1 Trigger Supervisor
  - 1 Event Builder and 1 Event Recorder
  - No Level 3 farm
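Why event blocking matters: at 200 KHz no front end CPU can take an interrupt per event, so events are accumulated in a buffer and handed off once per block. A minimal sketch of the idea, assuming a hypothetical block size and a hypothetical send_block() transport routine; this is an illustration only, not CODA's actual ROC code.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define EVENTS_PER_BLOCK 200          /* hypothetical: turns 200 KHz of events into ~1 KHz of blocks */
#define MAX_EVENT_BYTES  (5 * 1024)   /* design-report event size, 5 KB */

static uint8_t  block_buf[EVENTS_PER_BLOCK * MAX_EVENT_BYTES];
static size_t   block_used;
static unsigned block_events;

/* Supplied elsewhere: ships one completed block to the event builder. */
extern void send_block(const uint8_t *buf, size_t len, unsigned nevents);

/* Called once per Level 1 accept with one digitized event fragment.
 * Only every EVENTS_PER_BLOCK-th call does any I/O, so the front end
 * CPU handles ~1 KHz of work items instead of 200 KHz of interrupts. */
void add_event(const uint8_t *frag, size_t len)
{
    if (len > MAX_EVENT_BYTES)
        return;                       /* oversized fragment: drop (real code would flag an error) */

    memcpy(block_buf + block_used, frag, len);
    block_used += len;

    if (++block_events == EVENTS_PER_BLOCK) {
        send_block(block_buf, block_used, block_events);
        block_used   = 0;
        block_events = 0;
    }
}
```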

Hall D DAQ Baseline Architecture (diagram): front-end crates, Gigabit switch (200 KHz), 200 Level 3 filter nodes, 8 event builders, 4 event recorders, 4 tape drives, 4 Gigabit switches, network connection to silo (20 KHz)
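Spread across the farm, the baseline numbers imply modest per-node loads. A rough check, assuming the 1 GB/s stream divides evenly over the builders, filter nodes, and recorders (a real event builder only approximates this):

```c
#include <stdio.h>

int main(void)
{
    const double l1_out_bps  = 1e9;     /* 1 GB/s off the detector        */
    const double l1_out_hz   = 200e3;   /* 200 KHz into the Level 3 farm  */
    const double tape_bps    = 100e6;   /* 100 MB/s after Level 3         */
    const int    n_builders  = 8;
    const int    n_l3_nodes  = 200;
    const int    n_recorders = 4;

    printf("Per event builder : %.0f MB/s\n", l1_out_bps / n_builders / 1e6);
    printf("Per L3 filter node: %.0f MB/s, %.1f KHz\n",
           l1_out_bps / n_l3_nodes / 1e6, l1_out_hz / n_l3_nodes / 1e3);
    printf("Per event recorder: %.0f MB/s\n", tape_bps / n_recorders / 1e6);
    return 0;
}
```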

3. Additional Considerations, cont'd
- Crates vs networked front end boards?
- If crates used, VME vs cPCI vs ?
- (RT)Linux vs VxWorks in front end CPUs?
- Need low-latency interrupt in front end CPUs?
- Location of electronics, crates?
- Grounding design?

4. DAQ Challenges
All problems solved somewhere, many in CLAS
But new to JLab/CODA:
- Timing distribution
- Event blocking
- Many more front end crates
- Multiple event builders/recorders
- Large Level 3 farm
- Multiple, simultaneous DAQ systems (for commissioning)
- Need for fault tolerance
- Integration with control system
How are we going to do it? See next talk…

Backup slides

3. Comparison, cont'd
Columns: Event Size; L1 Input Rate; L1 Output Rate; L2 Output Rate; L3 Output Rate
- KTeV: 8 KB; 100 KHz, 800 MB/s; 20 KHz, 160 MB/s; 2 KHz, 7 MB/s
- CDF: 270 KB; 50 KHz, 13 GB/s; 300 Hz, 80 MB/s; 80 Hz, 23 MB/s
- D0: 250 KB; 10 KHz, 2.5 GB/s; 1 KHz, 250 MB/s; 70 Hz, 13 MB/s
- BaBar: 33 KB; (1200 L1) 2 KHz, 2.4 GB/s; None (65 MB/s); 100 Hz, 4 MB/s
- BTeV: KB; 800 GB/s; 80 KHz, 8 GB/s; 4 KHz, 200 MB/s

3. Comparison, cont'd
Columns: Event Size; L1 Input Rate; L1 Output Rate; L2 Output Rate; L3 Output Rate
- ATLAS: 1-2 MB; 75 KHz, 100 GB/s; 3 KHz, 5 GB/s; 200 Hz, 300 MB/s
- CMS: 1 MB; 100 KHz, 100 GB/s; 100 Hz, 100 MB/s
- CLAS: 6 KB; 4 KHz; 4 KHz, 25 MB/s; 4 KHz
- Hall D: 5 KB; 400 KHz; 200 KHz, 1 GB/s; none; 20 KHz, 100 MB/s