LHC experiments Requirements and Concepts ALICE

LEP in 1989...

… and in 2000

Outline
- ALICE general description
- Requirements
- Architecture
- Software
- Data Challenges

Two running modes
- Dr Jekyll… Pb-Pb collisions: a general-purpose heavy-ion experiment
- … and Mr Hyde: pp beam, large cross-section pp processes

ALICE data rates

Pb-Pb run (run period: 1 month, 10^6 s; total on tape: 1 PB)
  Trigger type           Event rate (Hz)   Event size (MB)
  Minimum Bias           20                1 - 87
  Central                                  67 - 87
  Dielectrons / Dimuon   200 / 670         0.7 - 2.4
  Data in DAQ: 24.5 GB/s   Data in EB: 2.5 GB/s   Data on tape: 1.25 GB/s

pp run (run period: 10 months)
  Event rate: 500 Hz   Event size: 2 MB
  Data in DAQ: NA   Data in EB: 0.5 GB/s   Data on tape: 0.1 GB/s
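As a rough consistency check (simple arithmetic only, assuming about 10^6 s of effective Pb-Pb data taking and about 2x10^7 s of effective pp running), the totals on tape follow from the data-on-tape rate times the live time:

```latex
% Pb-Pb: data-on-tape rate times effective run time
1.25~\mathrm{GB/s} \times 10^{6}~\mathrm{s} \approx 1.25~\mathrm{PB}
% pp: assuming ~2\times10^{7} s of effective time in a 10-month run
0.1~\mathrm{GB/s} \times 2\times10^{7}~\mathrm{s} \approx 2~\mathrm{PB}
```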

The original architecture

Detector Data Link (DDL)
Functions:
- main interface with the detectors
- handle detector-to-LDC data flow
- handle LDC-to-detector commands & data
Keywords: cheap, small, functional, rad-hard, long-distance, optical, used everywhere

Local Data Concentrator (LDC)
Functions:
- handle and control the local DDL(s)
- format the data
- perform local event building
- allow monitoring functions
- ship events to the event builders (GDCs)
Keywords: distributed; good data-moving capabilities from the DDL to the Event Building Link; CPU power not indispensable; not a farm

Global Data Collector (GDC)
Functions:
- accept the data sent from the LDCs
- perform final event building
- ship the events to the Permanent Data Storage (PDS)
Keywords: distributed; good data-moving capabilities from the LDCs to the PDS; CPU power not indispensable; a farm

Event Destination Manager (EDM)
Functions:
- collect availability information from the GDCs
- distribute event distribution policies to the data sources
Keywords: optimized network usage; look-ahead capabilities
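To make the EDM's role concrete, here is a purely illustrative C sketch of what a policy broadcast could look like; the struct name and fields are hypothetical and are not the DATE interface:

```c
/* Hypothetical sketch of an EDM policy message, NOT the real DATE interface.
 * The EDM gathers availability from the GDCs and broadcasts a table that
 * every data source (LDC) consults when choosing a destination. */
#include <stdint.h>

#define MAX_GDCS 64

struct edm_policy {
    uint32_t version;                /* bumped each time the policy changes     */
    uint32_t n_available;            /* number of GDCs currently accepting data */
    uint32_t gdc_id[MAX_GDCS];       /* identifiers of the available GDCs       */
    uint32_t free_buffers[MAX_GDCS]; /* look-ahead info: room left on each GDC  */
};
```

Because every LDC works from the same table for a given event, all sub-events of that event can be steered to a single GDC.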

Event Building Link (EBL)
Functions:
- move data from the LDCs to the GDCs
Keywords: big events (1 - 3 and 67 - 87 MB); low rates (20, 500, 670 Hz); many-to-many; mainly unidirectional

Overall key concepts
- Keep a forward flow of data
- Allow back-pressure at all levels (DDL, EBL, STL); a minimal back-pressure sketch follows this list
- Standard hardware and software solutions sought: ALICE collaboration, CERN computing infrastructure
- Whenever possible, go COTS
- During the pp run, keep any unused hardware busy
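The back-pressure idea can be illustrated with a bounded buffer whose producer blocks when the consumer lags. This is only a sketch in portable C with pthreads (not DATE code): a full downstream stage makes the upstream stage wait, so pressure propagates back (GDC to EBL to LDC to DDL) instead of data being dropped.

```c
/* Hypothetical back-pressure sketch, not DATE code. The fifo is assumed to be
 * initialized elsewhere (pthread_mutex_init / pthread_cond_init, indices 0). */
#include <pthread.h>

#define FIFO_DEPTH 128

struct bounded_fifo {
    void           *slot[FIFO_DEPTH];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
};

void fifo_push(struct bounded_fifo *f, void *block)
{
    pthread_mutex_lock(&f->lock);
    while (f->count == FIFO_DEPTH)                  /* downstream is full:      */
        pthread_cond_wait(&f->not_full, &f->lock);  /* block, i.e. back-pressure */
    f->slot[f->head] = block;
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count++;
    pthread_cond_signal(&f->not_empty);
    pthread_mutex_unlock(&f->lock);
}

void *fifo_pop(struct bounded_fifo *f)
{
    void *block;
    pthread_mutex_lock(&f->lock);
    while (f->count == 0)
        pthread_cond_wait(&f->not_empty, &f->lock);
    block = f->slot[f->tail];
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count--;
    pthread_cond_signal(&f->not_full);              /* wake a blocked producer */
    pthread_mutex_unlock(&f->lock);
    return block;
}
```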

Mismatch of rates
Recent introduction of:
- the Transition Radiation Detector (TRD)
- a dielectron trigger
- a change in the Pixel event size
- an increase in the estimated TPC average occupancy
The required throughput is an order of magnitude too high!
New scenarios:
- region-of-interest readout
- online compression
- online reconstruction
- introduction of a Level 3 trigger

The new architecture

The Event Building process
- Events flow asynchronously into the LDCs
- Each LDC performs, if needed, local event building
- The Level 3 farm, if present, is notified
- The Level 3 decision, if any, is sent to the LDCs and the GDC
- All data sources decide where to send the data according to the directives from the Event Destination Manager and the content of the event (see the destination-selection sketch after this list)
- The chosen GDC receives the sub-events, the optional reconstructed and compressed data, and the optional Level 3 decision
- The Event Building Link does the rest
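A minimal sketch of the destination decision, assuming the EDM publishes the list of currently available GDC identifiers (hypothetical names, not the actual DATE algorithm): every LDC applies the same deterministic rule to the same event number, so all sub-events of one event converge on a single GDC.

```c
/* Hypothetical destination choice, not the actual DATE algorithm:
 * hash the event number over the GDCs the EDM reports as available.
 * n_available is assumed to be non-zero. */
#include <stdint.h>

uint32_t choose_gdc(uint32_t event_number,
                    const uint32_t *available_gdc, uint32_t n_available)
{
    return available_gdc[event_number % n_available];
}
```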

Software environment: DATE
- Data acquisition environment for ALICE and the test beams
- Supports DDLs, LDCs, GDCs and the liaison to the PDS
- Covers both standalone and complex DAQ systems
- Integrated with HPSS and CASTOR (via CDR)
Keywords: C, TCP/IP, Tcl/Tk, Java, ROOT (a minimal data-shipping sketch in C follows)
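Since the keywords list C and TCP/IP, the sketch below shows, under the assumption of a pre-established TCP connection per GDC, how an LDC might frame and ship a sub-event. The header layout and function names are made up for illustration; this is not the DATE event format or API.

```c
/* Hypothetical LDC-side sketch, not the real DATE event format or API. */
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

struct subevent_header {
    uint32_t event_number;   /* used by the GDC to assemble the full event */
    uint32_t ldc_id;         /* which LDC produced this fragment           */
    uint32_t payload_bytes;  /* size of the data that follows              */
};

/* Loop until all bytes are on the wire: TCP send() may write less than asked. */
static int send_all(int sock, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(sock, p, len, 0);
        if (n <= 0)
            return -1;        /* connection problem: let the caller recover */
        p   += n;
        len -= (size_t)n;
    }
    return 0;
}

int ship_subevent(int gdc_sock, uint32_t event_number, uint32_t ldc_id,
                  const void *payload, uint32_t payload_bytes)
{
    struct subevent_header hdr = { event_number, ldc_id, payload_bytes };
    if (send_all(gdc_sock, &hdr, sizeof hdr) < 0)
        return -1;
    return send_all(gdc_sock, payload, payload_bytes);
}
```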

Data challenges
Use state-of-the-art equipment for a real-life exercise.
- 1998-1999, Challenge I: 7 days @ 14 MB/s, 7 TB
- 1999-2000, Challenge II: 2 * 7 days @ max 100 MB/s, > 20 TB
  - transfer of simulated TPC data
  - 23 LDCs * 20 GDCs (AIX/Solaris/Linux)
  - with offline filtering algorithms and an online objectifier (ROOT)
  - two different MSS (HPSS and CASTOR)
  - several problems → limited stability

Data Challenge II

Event building network: pure Linux setup, 20 data sources, Fast Ethernet local connections, Gigabit Ethernet backbone.
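For orientation, a quick look at nominal link capacities (simple arithmetic, ignoring protocol overhead) shows why a 100 MB/s class target needs the Gigabit backbone rather than a single Fast Ethernet link:

```latex
100~\mathrm{Mbit/s} \approx 12.5~\mathrm{MB/s~per~Fast~Ethernet~source},\qquad
20 \times 12.5~\mathrm{MB/s} = 250~\mathrm{MB/s~aggregate~peak},\qquad
1~\mathrm{Gbit/s} \approx 125~\mathrm{MB/s~per~backbone~link}
```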

Run log

Data Challenge III
- Will run during the winter 2000-2001 shutdown
- Target: 100 MB/s (or more) sustained over 7 to 10 days
- Improved stability
- A more "ALICE-like" setup: abandon the older architectures still in use at the test beams
- Implement 10% of the planned ALICE EB throughput
- Integrate new modules & prototypes: improved event building, Level 3, Regional Centers
- Will use the LHC computing testbed
- Better status reporting tools: use PEM if available
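For scale, if the 100 MB/s target were held around the clock (an optimistic assumption), the accumulated volume would be:

```latex
100~\mathrm{MB/s} \times 7~\mathrm{days} \times 86\,400~\mathrm{s/day} \approx 60~\mathrm{TB},\qquad
100~\mathrm{MB/s} \times 10~\mathrm{days} \times 86\,400~\mathrm{s/day} \approx 86~\mathrm{TB}
```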