The ALICE DAQ: Current Status and Future Challenges
P. Vande Vyvre, CERN-EP/AID

P. Vande Vyvre CERN/EP – The ALICE DAQ: Current Status and Future Challenges – 07-Feb

ALICE DAQ
The original and updated requirements
–The original requirements
–The updated requirements: higher multiplicity, addition of a detector
Future challenges
–The Region-Of-Interest readout
–Online filtering
–Enhanced data compression
–The new architecture
Current prototyping status
–The ALICE DATE
–Data transfer, sub-event building and event building
–Mass Storage System and Permanent Data Storage
–The ALICE Data Challenge


Original requirements: event size

Updated requirements: event size
Higher multiplicity: increased TPC event size
Transition Radiation Detector (TRD) added to ALICE

Original requirements: data throughput
Conservative data compression to reduce the data throughput to 1.25 GBytes/s

Updated requirements: data throughput
Conservative data compression and event rate reduction are insufficient
The TRD allows new types of online processing


Future Challenges 1
For dielectron events
–Region-Of-Interest identified by the TRD; could be used for:
–Region-Of-Interest readout
  Electron tracks in the TPC and TRD detectors
  Target: reduce the event size from 80 to 5 MBytes
–Online filtering
  Refine the dielectron L1 trigger by a software filter
  Target: reduce the event rate from 200 to 20 Hz
  Requires limited CPU power; current estimate done with STAR data: 40 kCU
Physics simulation and DAQ prototyping are starting
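The two reduction targets above combine into a simple back-of-envelope throughput calculation; a minimal sketch (the helper name is illustrative, not part of the ALICE software):

```python
def throughput_mb_per_s(event_size_mb, rate_hz):
    """Data rate produced by events of a given size at a given rate."""
    return event_size_mb * rate_hz

# Dielectron events before any reduction: 80 MB at 200 Hz
full = throughput_mb_per_s(80, 200)        # 16000 MB/s

# Region-Of-Interest readout shrinks the event from 80 to 5 MB
after_roi = throughput_mb_per_s(5, 200)    # 1000 MB/s

# Online filtering then cuts the rate from 200 Hz to 20 Hz
after_filter = throughput_mb_per_s(5, 20)  # 100 MB/s

print(full, after_roi, after_filter)
```

Taken together, the two steps reduce the dielectron data rate by a factor of 160.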

Future Challenges 2
For central and minimum-bias events
Enhanced data compression for the TPC data
Data compressed by applying the following conversion to the raw data:
–Cluster finder
–Local tracking
–Raw data converted into:
  Parameters of a local track model
  Distances of the raw data clusters from the local track model
Requires massive CPU power; current estimate done with STAR data: 400 kCU
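The compression scheme above can be illustrated in miniature: replace raw cluster coordinates by the parameters of a local track model plus each cluster's small residual from that model. The straight-line model and all function names here are assumptions for illustration only, not the actual TPC algorithms:

```python
def fit_line(points):
    """Least-squares straight line y = a*x + b through (x, y) clusters."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def compress(points):
    """Store track parameters plus per-cluster residuals (small numbers)."""
    a, b = fit_line(points)
    residuals = [y - (a * x + b) for x, y in points]
    return (a, b), residuals

def decompress(params, residuals, xs):
    """Rebuild the cluster coordinates from model + residuals."""
    a, b = params
    return [(x, a * x + b + r) for x, r in zip(xs, residuals)]

clusters = [(0, 1.0), (1, 3.1), (2, 4.9), (3, 7.0)]
params, res = compress(clusters)
restored = decompress(params, res, [x for x, _ in clusters])
```

The gain comes from the residuals being much smaller than the raw coordinates, so they can be encoded with far fewer bits.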

Updated requirements: data throughput
Partial readout for dielectron triggers
Online filtering

Architecture upgrade


The ALICE DATE
DATE: Data Acquisition and Test Environment
Software framework for ALICE DAQ development & prototyping
Cover multiple needs with one common DAQ system:
–Need for a system to develop the DAQ
–Need for a system for detector tests (lab and test beams)
–Need for a framework to develop readout and monitoring programs
ALICE DATE:
–Data flow: multiple LDCs, multiple GDCs
–Run control, error reporting, bookkeeping
–Common software interfaces for readout, online monitoring with ROOT
–Independent from physical layers: LDC I/O bus, event-building network, GDC machine
Used by ALICE test beams, NA57 and COMPASS
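The LDC/GDC data flow mentioned above can be sketched as: each LDC emits sub-events tagged with an event number, and a GDC assembles the sub-events carrying the same number into one full event. This is a purely illustrative toy, not DATE code; the class and method names are invented:

```python
from collections import defaultdict

class GDC:
    """Toy Global Data Concentrator: assembles sub-events into full events."""

    def __init__(self, n_ldcs):
        self.n_ldcs = n_ldcs
        self.partial = defaultdict(dict)  # event_id -> {ldc_id: payload}

    def receive(self, event_id, ldc_id, payload):
        """Collect a sub-event; return the full event once all LDCs reported."""
        self.partial[event_id][ldc_id] = payload
        if len(self.partial[event_id]) == self.n_ldcs:
            return self.partial.pop(event_id)
        return None

gdc = GDC(n_ldcs=2)
assert gdc.receive(1, "ldc0", b"tpc fragment") is None   # still waiting
event = gdc.receive(1, "ldc1", b"trd fragment")          # event complete
```

Sub-events from different LDCs may arrive interleaved across events, which is why the assembler keys its buffers by event number.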

Data transfer
[Diagram: data path from the detector front-end electronics in the P2 cavern, over the Detector Data Link (DDL) – Source Interface Unit (SIU), 200 m optical fibre, Destination Interface Unit (DIU) – into the Read-Out Receiver Card (RORC) of a Local Data Concentrator (LDC) crate/computer in the P2 access shaft]

Detector Data Link (DDL)
One-to-one data communication link (FEE and DAQ)
Main requirements:
–Common interface between detector front-end electronics and DAQ
  Single hardware & software to develop and maintain
  Define the interface soon to allow all the teams to work in parallel
–Raw data transfer to the DAQ
–Data block download to the FEE
–Cover the distance from the detector in the cavern to the ALICE computing room in the access shaft (200 m)
Implementation:
–Optical link
–Off-the-shelf Gbit/s opto-electronic components
–Prototypes integrated with DATE
–Tests with detectors will start this year

DDL SIU prototype

DDL DIU prototype

RORC prototype

Sub-event building
Many-to-one data collection inside a crate or a computer
–Collect data from several data sources over the computer I/O bus
–Assemble these data as one sub-event from a fraction of a detector
–Can work as a standalone DAQ system
Data sources:
–Current data sources are electronics cards in VME or CAMAC
–Current sub-event building done in software by a DATE program (Readout) running on the processor in a VME board
–In the future, the data sources will be DDL links
–First RORC prototypes done in VME form factor (1 VME board)
–Second RORC prototype will be in PCI form factor (1 PC adapter)
–Following closely the industry evolution (PCI, PCI-X, NGIO, FIO, SIO)

Event building
The event-building network was initially an especially demanding application
Today's dominant trends in the computing and networking industry:
–The Internet is the strongest incentive for ever higher bandwidth
–Commodity computing and networking is driving the industry
–Switches replace shared media
–Ethernet is the standard LAN media, TCP/IP is the standard protocol
–Ethernet's successors have the advantage of the existing installed base
The event-building network is similar to the backbone of a site like CERN:
–Ports: on Eth 10, 2000 on Eth 100, 30 on Eth 1000
–Switches: 100 Eth 100 or Eth 1000, central bandwidth 60 Gbps
Work focus: can we use standard LAN media and protocol, and how?
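The question above, whether event building can run over standard TCP/IP, is often prototyped with simple length-prefixed framing of event fragments over a stream socket. A minimal sketch over a local socket pair; the framing format and function names are assumptions for illustration, not an ALICE protocol:

```python
import socket
import struct

def send_fragment(sock, payload):
    """Frame a fragment as a 4-byte big-endian length followed by the data."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_fragment(sock):
    """Read one framed fragment; MSG_WAITALL blocks until all bytes arrive."""
    header = sock.recv(4, socket.MSG_WAITALL)
    (length,) = struct.unpack("!I", header)
    return sock.recv(length, socket.MSG_WAITALL)

# Demonstration over a connected local socket pair (POSIX only)
a, b = socket.socketpair()
send_fragment(a, b"sub-event from an LDC")
assert recv_fragment(b) == b"sub-event from an LDC"
```

Length-prefixing is needed because TCP is a byte stream with no message boundaries; the receiver must know where one fragment ends and the next begins.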

Mass Storage System
[Diagram: the ALICE DAQ GDCs connected over the network to the MSS core server and its metadata, main and secondary data servers, disk and tape arrays, NFS and DFS servers, and client systems]

Mass Storage System
–Isolate the DAQ and computing architecture from:
  the problems of physical data recording, CDR, volume handling, etc.
  the technology evolution in the storage area (magnetic/optical, robotics, etc.)
–Provide a logical structure to the storage infrastructure
  For example a file system:
  /hpss/alice/2005/pbpb_run/run00001.raw
  /hpss/alice/2005/pbpb_run/run00002.raw
  ...
–The MSS currently used by ALICE is HPSS
  but HPSS is expensive and supported on a limited set of platforms, and the MSS market is small
–Other systems to be used in future prototypes: CASTOR, EUROSTORE
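The logical file-system layout above lends itself to mechanical path generation; a small sketch following the slide's example paths (the helper function is hypothetical):

```python
def raw_file_path(year, run_type, run_number):
    """Build the logical MSS path for a raw-data file, zero-padded to 5 digits."""
    return f"/hpss/alice/{year}/{run_type}/run{run_number:05d}.raw"

print(raw_file_path(2005, "pbpb_run", 1))
# /hpss/alice/2005/pbpb_run/run00001.raw
```

Encoding year, run type, and run number in the path keeps the namespace browsable without consulting the MSS metadata server.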

Permanent Data Storage
Multiple parallel streams of magnetic tapes
–By LHC startup:
  A standard drive should achieve MBytes/s
  Standard capacity should be GBytes
  40 drives with 80 (dis)mounts/hour in total
Current CERN installation:
–Drive bandwidth 10 MBytes/s
–Tape capacity 50 GBytes
–45 drives
–6 silos of 6000 cartridges of 50 GB: 1.8 PBytes capacity
A feasible but expensive solution
The ratio of disk storage cost to tape storage cost is decreasing rapidly
By LHC time, online disk storage and offline archiving could be cost-effective
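The capacity figure quoted above for the current CERN installation follows directly from the silo numbers:

```python
silos = 6
cartridges_per_silo = 6000
cartridge_gb = 50

total_gb = silos * cartridges_per_silo * cartridge_gb
total_pb = total_gb / 1_000_000  # 1 PB = 10**6 GB

print(total_pb)  # 1.8 PB, matching the slide
```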

Prototyping
Development and prototyping progressing in parallel
Prototyping with the ALICE Data Challenge (ADC)
–Combined activity of the ALICE DAQ, ALICE Offline and IT teams
–ADC1: 6 days at 14 MB/s (7 TB ROOT dataset)
ADC2: large DATE system
–Data sources (18 LDCs)
  9 Motorola VME + 2 IBM WS (test beam area – Hall 887 on Prevessin site)
  7 Motorola VME + DDL prototypes (DAQ Lab – Bld 53 on Meyrin site)
–Network: Fast Ethernet switches, Gigabit Ethernet backbone
–Data destinations (computing centre)
  20 PC/Linux for event building, ROOT I/O formatting, L3 filter
  Central data recording (target 100 MB/s)

ALICE Data Challenge II

Conclusion
The requirements of the ALICE DAQ have evolved considerably
New ways to reduce the huge data volume will be investigated:
–Region-Of-Interest readout
–Online filtering
–Enhanced data compression scheme
Development progressing (almost according to schedule)
The prototypes are tested during the ALICE Data Challenges
Future milestones:
–Integration of the DDL with detectors
–ALICE Data Challenge II: from the DDL to 100 MB/s
Computing and communication technology evolution is positive
Areas of concern: storage cost and the Mass Storage System