CODA
Graham Heyes, Computer Center Director, Data Acquisition Support group leader

About CODA
CODA is the common DAQ system at JLab. It is designed to be:
– Extensible.
– Modular.
– High speed.
– Tailored for nuclear physics DAQ.
It is supported by JLab staff, and its design is driven by JLab requirements.

GlueX requirements
Rates:
– Event rate at L1: ~200 kHz.
– Event size at L1: ~5 kbytes, giving ~1 GB/s off the detector.
– ~15 to 20 kHz after L3, giving ~100 MB/s to storage.
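
The headline bandwidth figures follow directly from rate times event size. The short C sketch below just reproduces that arithmetic from the numbers quoted above; it is an illustration, not part of CODA.

```c
#include <stdio.h>

/* Back-of-envelope check of the GlueX rate figures quoted above. */
int main(void)
{
    double l1_rate_hz  = 200e3;  /* L1 trigger rate: ~200 kHz        */
    double event_bytes = 5e3;    /* event size at L1: ~5 kbytes      */
    double l3_rate_hz  = 20e3;   /* accepted rate after L3: ~20 kHz  */

    double off_detector = l1_rate_hz * event_bytes;  /* bytes/s off the detector */
    double to_storage   = l3_rate_hz * event_bytes;  /* bytes/s to storage       */

    printf("Off detector: %.1f GB/s\n", off_detector / 1e9);  /* ~1.0 GB/s */
    printf("To storage:   %.1f MB/s\n", to_storage / 1e6);    /* ~100 MB/s */
    return 0;
}
```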

CODA ~2003
Current front-end hardware:
– Motorola PowerPC processors.
– Hall B: ~20 readout controllers in parallel.
– Highest event rate: ~10 kHz.
– Highest data rate: ~20 MB/s to disk.
Event builder:
– Switched gigabit Ethernet.
– UNIX (Solaris or Linux), multithreaded on SMP hardware.

Limitations
– Hardware interrupt or polling overhead limits the event rate, depending on CPU speed and architecture.
– Network protocol efficiency limits the event transfer rate.
– The throughput of the event builder and the speed of the disk drives limit the throughput to disk.
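
To make the first limitation concrete, a fixed per-event cost for the interrupt (or poll) plus readout caps the achievable trigger rate no matter how fast the network is. The sketch below assumes a round 50 microsecond per-event overhead purely for illustration; it is not a measured value for the JLab front ends.

```c
#include <stdio.h>

/* Why per-event interrupts cap the trigger rate: an assumed 50 us
 * per-event overhead (illustrative, not measured) allows only 20 kHz,
 * an order of magnitude below the 200 kHz GlueX L1 rate. */
int main(void)
{
    double per_event_overhead_s = 50e-6;  /* assumed interrupt + readout cost */
    double max_rate_hz = 1.0 / per_event_overhead_s;

    printf("Per-event readout tops out near %.0f kHz\n", max_rate_hz / 1e3);
    return 0;
}
```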

Solutions
Trigger rate too high for per-event readout:
– Read blocks of events. Requires hardware tagging with an event identifier (number).
– A 1 kHz block rate with 100 events per block is similar to current rates in Halls A/B/C.
Data rate too high for a single data link or CPU:
– Parallel readout at the front end: >20 readout controllers can handle 1 GB/s from the detector.
– Parallel event builder; the number of event builders depends on technology: >10 EBs for current 1 Gbit/s networks.
– Parallel streams in the online L3 farm.
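
As a sketch of what "hardware tagging with an event identifier" implies for the data format, the C structure below shows one possible layout for a block of events carrying its starting event number and count. The field names are hypothetical and do not reproduce the actual CODA bank format; the point is that blocking pays the per-trigger readout overhead once per block instead of once per event.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical layout for a block of events read out in one interrupt/poll.
 * Field names are illustrative only; the real CODA event format differs. */
typedef struct {
    uint32_t block_length;    /* total block length in 32-bit words          */
    uint32_t first_event_num; /* hardware-assigned number of the first event */
    uint32_t event_count;     /* events in this block, e.g. 100              */
    uint32_t roc_id;          /* readout controller that produced the block  */
    uint32_t payload[];       /* concatenated event fragments                */
} event_block_t;

int main(void)
{
    /* With ~100 events per block, the per-trigger interrupt/poll cost is paid
     * at the ~1 kHz block rate quoted above rather than at the full L1 rate. */
    printf("block header: %zu bytes\n", sizeof(event_block_t));
    return 0;
}
```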

Readout Controller
– Read blocks from the detector at ~1 kHz.
– Reformat data blocks into events.
– Arrange events into blocks for transmission.
– Transmit event blocks to the event builders, using the ET system to transfer data. ET handles load balancing and fault tolerance.
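
Below is a minimal sketch of the readout-controller loop described above, with stub functions (read_block_from_hardware, reformat, transport_send) standing in for the board libraries and the ET transport. It only illustrates the read / reformat / transmit structure under those assumptions, not the real CODA ROC code or the actual ET API.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MAX_BLOCK_WORDS 65536

/* Stubs standing in for crate hardware and the ET transport; a real readout
 * controller would call board drivers and the ET client library here. */
static size_t read_block_from_hardware(uint32_t *buf, size_t max_words)
{
    (void)buf; (void)max_words;
    return 0;                          /* no real data in this sketch */
}

static size_t reformat(const uint32_t *raw, size_t nwords, uint32_t *out)
{
    memcpy(out, raw, nwords * sizeof(uint32_t));  /* placeholder 1:1 copy */
    return nwords;
}

static void transport_send(const uint32_t *events, size_t nwords)
{
    (void)events; (void)nwords;        /* ET would load-balance across EBs here */
}

int main(void)
{
    static uint32_t raw[MAX_BLOCK_WORDS], events[MAX_BLOCK_WORDS];

    for (int i = 0; i < 3; i++) {      /* the real loop runs until end-of-run */
        /* 1. Read a block of ~100 triggers at the ~1 kHz block rate. */
        size_t nraw = read_block_from_hardware(raw, MAX_BLOCK_WORDS);

        /* 2. Reformat the raw block into events and regroup for transmission. */
        size_t nout = reformat(raw, nraw, events);

        /* 3. Hand the event block to the transport (ET) for the event builders. */
        transport_send(events, nout);
    }
    return 0;
}
```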

Control and Monitoring
The current control system for CODA is slow and hard to maintain. A new control system, written in Java for portability, is being developed and will incorporate a "slow control" system:
– It will integrate with EPICS but be easier to use than EPICS.
– It is designed for rapid design and prototyping.
– It uses industry-recognized communication standards.
– It will allow CODA to "compete" with LabVIEW and other integrated DAQ software.

Summary
The required rates can be handled by existing technology, and the task gets easier and needs less hardware as technology advances. Problem areas:
– Storage is not advancing as rapidly as required.
– The L3 farm needs to be in place close to startup.