WPFL General Meeting, 06-07-2010, Nikhef. Shore DAQ system - report on studies. A. Belias, NOA-NESTOR.

Presentation transcript:

Shore DAQ system - report on studies. A. Belias, NOA-NESTOR.

Shore DAQ Tasks
- Receive all data from the telescope (PMTs / controls / Earth & Sea science) → ALL DATA TO SHORE
- Process data to extract events & calibration constants → ON-LINE DATA FILTER OF WHOLE TELESCOPE
- Archive events and operational conditions → EVENT, CALIBRATION, RAW DATA, DETECTOR STATUS
- Control and monitor the readout components → MISSION-CRITICAL AND SUB-CRITICAL UNITS
- Update local databases & export to remote sites via GRID → VALIDITY CONTEXTS AND CACHE COHERENCE

Data aggregation scheme
- ALL data are sent to shore.
- The expected rate (Gb/s) cannot simply be stored.
- Concentrate all data in temporary buffers.
- Aggregate a time-slice of data from the whole telescope.
- Process sequences of time-slices on-line in a trigger farm.
- Time-position-amplitude correlations of PMT hits in real time for the whole telescope.
- Archive event data, calibration data, and a fraction of the raw data.
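The aggregation step above can be sketched in a few lines: hits from all optical modules are bucketed into fixed-length time slices spanning the whole telescope, and complete slices are handed round-robin to farm nodes. This is an illustrative sketch, not KM3NeT code; the slice length, the hit tuple layout, and the function names are assumptions.

```python
from collections import defaultdict

SLICE_NS = 100_000_000  # hypothetical 100 ms time slice


def aggregate(hits):
    """Group (om_id, t_ns, amplitude) hits into whole-telescope time slices,
    keyed by the slice index t_ns // SLICE_NS."""
    slices = defaultdict(list)
    for om_id, t_ns, amp in hits:
        slices[t_ns // SLICE_NS].append((om_id, t_ns, amp))
    return dict(slices)


def dispatch(slices, n_nodes):
    """Assign each complete time slice to a trigger-farm node,
    round-robin by slice index."""
    return {idx: idx % n_nodes for idx in sorted(slices)}
```

In practice consecutive slices would overlap slightly, so that an event straddling a slice boundary is not split between farm nodes.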

GRID DAQ architecture - Option A
[Diagram] Dataflow: MEOC → optical demux → O/E conversion → large FPGA with many transceivers (BUFFER SYSTEM) → multiple 1 Gbit Ethernet links → Gbit Ethernet switch fabric → data processing farm (Node 1 … Node N) → archive. A database and a run control & monitor server connect to the switch fabric over 1 Gbit Ethernet.

Data Aggregation (example using the NIK)
[Diagram] Event data from e.g. a DU (data OM 1 … data OM n) are written into dual-port memories (DP mem #1 … #n), driven by a 311 MHz heartbeat. A memory address counter/generator maintains one Write Address Pointer per input (WAP 1 … WAP n) and a single Read Address Pointer (RAP) that is equal for all memories. Per-link address offsets (Prop. Value 1 … Prop. Value n, from signal propagation time measurements of each point-to-point connection) compensate the determined, fixed relative time offsets, so that data received in real time are available on shore for time-sliced readout: dual-port memory for round-robin scheduling.
Wiki text: Round-robin (RR) is one of the simplest scheduling algorithms for processes in an operating system; it assigns time slices to each process in equal portions and in circular order, handling all processes without priority. Round-robin scheduling is simple, easy to implement, and starvation-free. It can also be applied to other scheduling problems, such as data packet scheduling in computer networks.

Simulation studies
- Study the attainable performance of the readout.
- Investigate the consequences of sudden PMT rate bursts, dedicated calibration runs, overlapping time-slices and triggers.
- Queue modelling to determine latency issues, buffer requirements and processing power.
- Started with simulations of a "vertical slice".
- Simulations can help to defer buying hardware, to get the best performance at low cost.
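The queue modelling mentioned above can start from something as simple as a slot-based backlog model: per-tick arrivals minus a fixed drain capacity give the buffer occupancy over time, and a simulated PMT rate burst shows the peak backlog a buffer must absorb. The function below is a minimal sketch; the rates in the usage are made up for illustration.

```python
def buffer_occupancy(arrivals, drain_per_tick):
    """Slot-based queue model: each tick, `arrivals[i]` words enter and up
    to `drain_per_tick` words leave. Returns the occupancy trace and the
    peak backlog, i.e. the minimum buffer size that avoids data loss."""
    q, trace = 0, []
    for a in arrivals:
        q = max(0, q + a - drain_per_tick)
        trace.append(q)
    return trace, max(trace)


# Hypothetical load: steady 8 words/tick, a 5-tick burst of 20, drain 10.
arrivals = [8] * 10 + [20] * 5 + [8] * 10
trace, peak = buffer_occupancy(arrivals, 10)  # peak backlog: 50 words
```

Even this toy model answers the slide's questions in miniature: the peak of the trace sizes the buffer, and the number of ticks the backlog takes to drain back down bounds the extra latency a burst introduces.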

GRID DAQ architecture - Option B
[Diagram] Dataflow: MEOC → optical demux → O/E conversion → large FPGA with many transceivers (×5, buffer) → 18 lines of 1 Gbit Ethernet → Gbit Ethernet switch fabric → data processing farm (Node 1 … Node N) → archive. A database and a run control & monitor server connect to the switch fabric over 1 Gbit Ethernet.

Summary & Status
- The on-shore DAQ must have the flexibility to adapt and the modularity to scale.
- The use of FPGA systems allows for flexibility in hardware.
- The use of mass-market, industrial systems allows long-term (10+ years) maintainability at low cost.
- Work on data aggregation using the FPGA and the PON has started.
- Simulation studies of system performance have started.