Online Data Challenges
David Lawrence, JLab
Feb. 20, 2014

Online Data Challenge 2013
Primary participants: Elliott Wolin, Sean Dobbs, David Lawrence
When: August 26–29, 2013
Where: Hall-D Counting House
Objective: Test data flow and monitoring between the final-stage Event Builder (EB) and the tape silo (i.e. neither the DAQ system nor the offline were included)

Input Data
Pythia-generated events simulated, smeared, and passed through the L1 event filter*
Events digitized and written in EVIO format
– mc2coda library used to write in the new event-building scheme specification provided by the DAQ group
– Translation table derived from Fernando's spreadsheet detailing the wiring scheme that will be used
*The event filter may have used uncalibrated BCAL energy units, but it resulted in roughly 36% of events being kept.
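For context, the translation table is essentially a lookup from DAQ electronics coordinates (crate, slot, channel) to physical detector channels. Below is a minimal C++ sketch of such a lookup; the struct, field, and function names are illustrative placeholders, not the actual GlueX/mc2coda implementation.

```cpp
#include <map>
#include <string>
#include <tuple>

// Hypothetical detector-channel descriptor; field names are illustrative only.
struct DetectorChannel {
    std::string system;   // e.g. "BCAL", "FCAL", "CDC"
    int         module;
    int         channel;
};

// Key: DAQ electronics address (crate, slot, channel) as cabled in the hall.
using DAQAddress = std::tuple<int, int, int>;

// In practice this table would be filled at startup from the wiring
// spreadsheet (e.g. a CSV export) or a database.
std::map<DAQAddress, DetectorChannel> translation_table;

// Look up the detector channel for a raw hit; returns false if unmapped.
bool Translate(int crate, int slot, int chan, DetectorChannel &out)
{
    auto it = translation_table.find(std::make_tuple(crate, slot, chan));
    if (it == translation_table.end()) return false;
    out = it->second;
    return true;
}
```

Unmapped addresses are worth flagging loudly during a data challenge, since they usually point to a cabling or spreadsheet discrepancy rather than a software bug.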

Computer systems (many of these on loan)
*n.b. all L3 machines connected via InfiniBand

L3 Infrastructure Test
10 nodes used to pass events from the Event Builder (EB) to the Event Recorder (ER)
– EB on gluon44, ER on halldraid1
Two "pass-through" modes used:
– Simple buffer copy without parsing (40 kHz)
– Buffer copy with parsing and application of the translation table (~13 kHz)
DL3TriggerBDT algorithm from MIT
– Unable to run for an extended time without crashing
– Cause of crashes unknown and under investigation; the MIT and TMVA code itself has been eliminated as the cause
– Total rate of ~7.2 kHz
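To make the two pass-through modes concrete, here is a schematic C++ sketch of the per-node loop: the fast mode forwards the raw EVIO buffer untouched, while the slower mode parses it and applies the translation table first. The I/O and parsing functions are stubs standing in for the real CODA/EVIO calls and do not reflect their actual APIs.

```cpp
#include <cstdint>
#include <vector>

using Buffer = std::vector<uint32_t>;

// Stubs standing in for the real transport and EVIO calls.
Buffer ReceiveFromEB()             { return Buffer(1024, 0); } // blocking read from the Event Builder
void   SendToER(const Buffer &)    {}                          // forward to the Event Recorder
void   ParseAndTranslate(Buffer &) {}                          // EVIO parse + translation-table lookup

// One L3 node in pass-through mode: every event is kept and forwarded.
void PassThroughLoop(bool parse_events)
{
    while (true) {
        Buffer buf = ReceiveFromEB();
        if (parse_events) {
            // "Full" pass-through: parse banks and apply the translation table (~13 kHz measured)
            ParseAndTranslate(buf);
        }
        // "Simple" pass-through just copies the buffer through (~40 kHz measured)
        SendToER(buf);
    }
}
```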

L3 Prototype Farm
Fast computers used for testing (gluon44, gluon45); computers ordered for L3 infrastructure
Max rate estimate for the L3-BDT prototype farm: (1.6 kHz)(12366/5104)(10 nodes) = ~39 kHz
10 computers ordered; they will be shipped next week for the L3 prototype farm
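The farm estimate is just a linear scaling of the measured per-node rate; a trivial worked version is shown below, taking the 1.6 kHz figure and the 12366/5104 performance factor directly from the slide (the interpretation of 1.6 kHz as the single-test-node BDT rate is an assumption).

```cpp
#include <cstdio>

int main()
{
    const double per_node_rate_khz = 1.6;            // measured BDT rate on a single test node (assumed)
    const double perf_scaling      = 12366.0 / 5104; // performance factor quoted on the slide
    const int    n_nodes           = 10;             // ordered L3 prototype farm

    // (1.6 kHz)(12366/5104)(10 nodes) ≈ 39 kHz
    std::printf("Estimated L3-BDT farm rate: %.1f kHz\n",
                per_node_rate_khz * perf_scaling * n_nodes);
    return 0;
}
```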

Monitoring System (RootSpy) Test
Histograms produced by several plugins were displayed via the RootSpy GUI
Overlay with archived histograms
RootSpy archiver (writing summed histograms to file)
Integration with CODAObjects
Still need to fully implement the final-histograms mechanism
Pre-L3 and post-L3 monitoring
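At its core the archiver sums the histograms received from each producer and periodically writes the totals to a ROOT file. The sketch below illustrates that summing/writing step using plain ROOT calls; it is not the RootSpy code itself, and the function names are invented for illustration.

```cpp
#include <map>
#include <memory>
#include <string>

#include "TFile.h"
#include "TH1.h"

// Archive of summed histograms, keyed by histogram name.
std::map<std::string, std::unique_ptr<TH1>> archive;

// Add one histogram received from a producer into the running sum.
void AddToArchive(const TH1 &received)
{
    auto &slot = archive[received.GetName()];
    if (!slot) {
        slot.reset(static_cast<TH1*>(received.Clone()));  // first copy defines the binning
        slot->SetDirectory(nullptr);                       // keep it out of any open TFile
    } else {
        slot->Add(&received);                              // sum subsequent copies
    }
}

// Periodically write the summed histograms to the archive file.
void WriteArchive(const char *filename)
{
    TFile fout(filename, "RECREATE");
    for (auto &kv : archive) kv.second->Write(kv.first.c_str());
}
```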

Other Monitoring
Ganglia installed and working for monitoring the general health of all computer nodes
JANA built-in monitoring available via cMsg, allowing remote:
– Probing for rates
– Changing the number of threads
– Pause, resume, quit (either individually or as a group)
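These remote-control features amount to a small command dispatcher inside each monitored process. The sketch below shows the general idea; the command strings and state variables are hypothetical, and the cMsg transport itself is deliberately left out rather than guessing its API.

```cpp
#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <string>

// Placeholder application state; in the real system these live inside the JANA application.
std::atomic<bool> paused{false};
std::atomic<int>  n_threads{4};
std::atomic<long> n_events{0};

// Callback invoked when a remote command arrives over the messaging system
// (cMsg in the real setup); the command names here are illustrative only.
void HandleRemoteCommand(const std::string &cmd, const std::string &arg)
{
    if (cmd == "get_rate") {
        std::printf("events processed so far: %ld\n", n_events.load());
    } else if (cmd == "set_nthreads") {
        n_threads = std::atoi(arg.c_str());   // real code would also resize the thread pool
    } else if (cmd == "pause") {
        paused = true;                        // event loop checks this flag between events
    } else if (cmd == "resume") {
        paused = false;
    } else if (cmd == "quit") {
        std::exit(0);                         // real code would shut down cleanly
    }
}
```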

RAID to Silo Test
Transfer from RAID disk to silo tested
– At least 50 MB/s achieved, but possibly higher
– Certificate and jput set up, but we were informed later that a different mechanism should be used for experimental data from the halls
– Will arrange for IT division experts to come run tests and educate us on the proper way to transfer data to the silo

Primary Goals
1. Test system integration from L1 trigger to tape using a low-rate cosmic trigger
2. Test system integration from ROC to tape using M.C. data at high rate
3. Test calibration event tagging
Secondary goal: Test multiple output streams
*A fully installed and calibrated trigger system is not required (only 1 crate needed).
*A fully installed controls system is not required (only on/off of some channels needed).

Differences from ODC2013
Data will come from crates in the hall
CODA-component Farm Manager
New RootSpy features
– advanced archiver functions
– reference histograms
High-speed copy from RAID to Tape Library
Faster farm CPUs (16-core, 2.6 GHz Xeon E)

Schedule
ODC2014: May 12–16, 2014

Summary
EB-to-ER data flow piece tested
– L3 infrastructure tested and works in pass-through mode at 40 kHz (mysterious issues with the L3 plugin still being tracked down)
Monitoring system tested
– Identical pre-L3 and post-L3 monitoring systems
– RootSpy GUI used with multiple producers
– RootSpy archiver
RAID to tape silo tested
– Successfully transferred > 1 TB from the counting house to the silo at >= 50 MB/s
– Rate seemed slower than anticipated by a factor of 2, but the measurement mechanism was not accurate due to staging
– An alternate transfer method has been advised and will be pursued