

1 ALICE O2 2014 | Pierre Vande Vyvre
O2 Project: Upgrade of the ALICE online and offline computing
Meeting with Warsaw University of Technology
Pierre Vande Vyvre, for the O2 project

2 ALICE experiment at CERN
- Detector: 18 technologies
- Size: 16 x 26 meters
- Weight: 10,000 tons
- Collaboration: > 1000 members, > 100 institutes, > 30 countries

3 Data deluge
- Beam crossing at 40 MHz
- 600 million potential collisions/second
- 100-10,000 "interesting" events/second
- 10-500 million measurement channels
- 1-50 MB of data per event after the first level of data compression
- Severe selection is mandatory to keep computing costs at a reasonable level
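A back-of-envelope estimate (illustrative arithmetic on the ranges quoted above, not a figure from the slide) shows why severe selection is mandatory:

```python
# Untriggered data rate if every "interesting" event were recorded,
# using the order-of-magnitude ranges quoted on the slide.
def raw_rate_gb_per_s(events_per_s, mb_per_event):
    """Data rate in GB/s for a given event rate and event size."""
    return events_per_s * mb_per_event / 1000.0

# Upper ends of the slide's ranges: 10,000 events/s at 50 MB each.
print(raw_rate_gb_per_s(10_000, 50))  # 500.0 GB/s -- far beyond affordable storage
# Lower ends: 100 events/s at 1 MB each.
print(raw_rate_gb_per_s(100, 1))      # 0.1 GB/s
```

Even the optimistic lower end is non-trivial sustained over a running year, and the upper end is two orders of magnitude beyond what any experiment archives.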

4 Data volume reduction by event selection
- Multi-level trigger system (40 MHz → a few kHz)
- Reject background
- Select most interesting interactions
- Custom computer to reduce the total data volume
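A multi-level trigger can be pictured as a cascade of increasingly selective filters, each inspecting only what the previous level accepted. A minimal sketch; the rejection factors below are invented for illustration, not ALICE's real per-level numbers:

```python
# Toy model of a multi-level trigger cascade: each level divides the
# surviving event rate by its rejection factor. Factors are illustrative.
def cascade(input_rate_hz, rejection_factors):
    """Return the output rate after applying each trigger level in turn."""
    rate = input_rate_hz
    for factor in rejection_factors:
        rate /= factor
    return rate

# 40 MHz beam-crossing rate through three levels rejecting 100x, 10x, 10x:
out = cascade(40e6, [100, 10, 10])
print(out)  # 4000.0 Hz -- "40 MHz -> a few kHz", as on the slide
```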

5 Data Volume
Recording (mass storage) rates and archived data per year:

Experiment  Beam type  Events/s  MB/s   Data archived/yr (PB)
ALICE       Pb-Pb      200       1250   2.3
ATLAS       pp         100       -      6.0
CMS         pp         100       -      3.0
LHCb        pp         200       40     1.0

6 Online and Offline Computing
Particles → Signal → Data → Information → Knowledge

Online computing:
- Inspect all interactions.
- Reduce the total data volume by selecting the most interesting ones.
- Store the related data and make them available for offline processing.

Offline computing:
- Process all the data.
- Reduce the total data volume by extracting the information (particle properties) from raw data.
- Store the results and make them available for analysis.
- Produce knowledge by analyzing the information.

7 Upgrade of the ALICE experiment
Now: reducing the event rate from 40 MHz to ~1 kHz
- Select the most interesting particle interactions
- Reduce the data volume to a manageable size

After 2018:
- Much more data (x100), because:
  - Higher interaction rate
  - More violent collisions → more particles → more data (1 TB/s)
  - Physics topics require measurements characterized by a very small signal-over-background ratio → large statistics
  - Large background → traditional triggering or filtering techniques very inefficient for most physics channels
- Read out all particle interactions (Pb-Pb) at the anticipated interaction rate of 50 kHz
  - No more data selection
  - Continuous detector read-out → data less structured than in the past
- Read out and process all interactions with a standard computer farm
  - ~1,500 nodes with the computing power expected by then
- Total data throughput out of the detectors: 1 TB/s
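Dividing the total detector throughput across the planned farm shows why ~1,500 commodity nodes are a plausible scale (a rough, illustrative calculation, not a slide figure):

```python
# Per-node load if 1 TB/s is spread evenly over the planned farm,
# using the two figures quoted on the slide.
total_tb_per_s = 1.0   # detector output
nodes = 1500           # farm size
gb_per_node = total_tb_per_s * 1000 / nodes
print(round(gb_per_node, 2))  # 0.67 GB/s per node
```

Well under a GB/s per node of sustained input is within reach of commodity network interfaces, which is what makes a standard computer farm (rather than custom hardware) viable.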

8 Requirements
Focus of the ALICE upgrade on physics probes requiring high statistics: sample 10 nb^-1

Online system requirements:
- Sample the full 50 kHz Pb-Pb interaction rate (current limit at ~500 Hz, a factor 100 increase); system to scale up to 100 kHz
- TPC drift time >> mean time between minimum-bias interactions → continuous (trigger-less) read-out
- ~1.1 TB/s detector read-out

However:
- Storage bandwidth limited to a much lower value (design decision/cost)
- Many physics probes have low S/B: the classical trigger/event-filter approach is not efficient
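The continuous read-out argument follows from the TPC drift time being much longer than the mean spacing between interactions. A small sketch, assuming a drift time of roughly 100 us (a typical TPC figure, not stated on the slide):

```python
# Why triggered read-out breaks down at 50 kHz: several Pb-Pb
# interactions always overlap within one TPC drift window.
# The ~100 us drift time is an assumed typical value.
interaction_rate_hz = 50_000
drift_time_us = 100.0
mean_spacing_us = 1e6 / interaction_rate_hz   # 20.0 us between interactions
overlap = drift_time_us / mean_spacing_us
print(overlap)  # 5.0 -- ~5 interactions pile up per drift window
```

With events inseparably overlapped in the drift volume, there is no clean "event" to trigger on, hence trigger-less, continuous read-out.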

9 O2 System from the Letter of Intent
Design guidelines:
- Handle > 1 TB/s detector input
- Produce (timely) physics results
- Minimize "risk" for physics results
  ➪ Allow for reconstruction with improved calibration, e.g. store clusters associated to tracks instead of tracks
  ➪ Minimize dependence on initial calibration accuracy
- Keep cost "reasonable"
  ➪ Limit storage system bandwidth to ~80 GB/s peak and 20 GB/s average
  ➪ Optimal usage of compute nodes
- Online reconstruction to reduce data volume
- Output of the system: AODs

10 O2 Project Requirements
- Handle > 1 TB/s detector input
- Support for continuous read-out
- Online reconstruction to reduce data volume
- Common hardware and software system developed by the DAQ, HLT, and Offline teams

11 O2 Project Institutes
1. California Institute of Technology, Pasadena, US
2. Creighton University, Omaha, US
3. FIAS, Frankfurt, Germany
4. GSI, Darmstadt, Germany
5. IIT, Mumbai, India
6. INFN, Bologna, Italy
7. IPNO, Orsay, France
8. IRI, Frankfurt, Germany
9. Jammu University, Jammu, India
10. KISTI, Daejeon, Korea
11. KMUTT (King Mongkut's University of Technology Thonburi), Bangkok, Thailand
12. Laboratoire de Physique Corpusculaire (LPC), Clermont, France
13. Laboratoire de Physique Subatomique et de Cosmologie (LPSC), Grenoble, France
14. Lawrence Berkeley National Lab., US
15. LIPI, Bandung, Indonesia
16. Nuclear Physics Institute, Academy of Sciences of the Czech Republic
17. Oak Ridge National Laboratory, US
18. Rudjer Bošković Institute, Zagreb, Croatia
19. SUBATECH, Ecole des Mines, Nantes, France
20. SUP, Sao Paulo, Brazil
21. Thammasat University, Bangkok, Thailand
22. Technical University of Split (FESB), Croatia
23. University of Cape Town, South Africa
24. University of Houston, US
25. University of Technology, Warsaw, Poland
26. University of Tennessee, US
27. University of Texas, US
28. Wayne State University, US
29. Wigner Institute, Budapest, Hungary
30. CERN, Geneva, Switzerland

Active interest from:
- JINR, Dubna, Russia
- KTO Karatay University, Turkey
- NRC KI, Kurchatov, Russia
- University of Talca, Chile

12 Hardware Architecture
[Diagram] Detector data (TPC, ITS, TRD, TOF, EMC, PHO, Muon; trigger detectors provide L0/L1 via the FTP) flow over ~2500 DDL3 links (10 Gb/s) into ~250 FLPs (First Level Processors). The FLPs feed ~1250 EPNs (Event Processing Nodes) through the farm network (2 x 10 or 40 Gb/s per FLP), and the EPNs write to data storage through the storage network.
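The link counts on the slide imply the fan-in and fan-out ratios of the farm (simple division on the quoted numbers, for illustration):

```python
# Ratios implied by the slide's component counts.
ddl3_links = 2500   # detector read-out links, 10 Gb/s each
flps = 250          # First Level Processors
epns = 1250         # Event Processing Nodes

links_per_flp = ddl3_links / flps
epns_per_flp = epns / flps
print(links_per_flp)  # 10.0 detector links into each FLP
print(epns_per_flp)   # 5.0 EPNs per FLP on average
```

Ten 10 Gb/s links per FLP also explains the need for a 2 x 10 or 40 Gb/s uplink on each FLP once the first-stage compression has been applied.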

13 O2 Architecture and data flow
[Diagram] The data volume is reduced in stages along the read-out chain: 1 TB/s of detector data (dominated by the TPC), then 250 GB/s, then 125 GB/s, then 80 GB/s into storage.
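The staged throughputs quoted above imply the reduction factor achieved at each step (arithmetic on the slide's numbers only):

```python
# Reduction factor per stage of the data flow, from the quoted
# throughputs in GB/s: detector -> FLP -> EPN -> storage.
stages = [1000, 250, 125, 80]
factors = [stages[i] / stages[i + 1] for i in range(len(stages) - 1)]
print(factors)                 # [4.0, 2.0, 1.5625]
print(stages[0] / stages[-1])  # 12.5 -- overall reduction before storage
```

Most of the compression happens in the first stage, which is why it is worth doing close to the detector, on the FLPs.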

14 Project Organization
Project leaders: P. Buncic, T. Kollegger, P. Vande Vyvre

O2 Computing Working Groups (CWGs) and chairs:
1. Architecture: S. Chapeland
2. Tools & Procedures: A. Telesca
3. Dataflow: T. Breitner
4. Data Model: A. Gheata
5. Computing Platforms: M. Kretz
6. Calibration: C. Zampolli
7. Reconstruction: R. Shahoyan
8. Physics Simulation: A. Morsch
9. QA, DQM, Visualization: B. von Haller
10. Control, Configuration, Monitoring: V. Chibante
11. Software Lifecycle: A. Grigoras
12. Hardware: H. Engel
13. Software Framework: P. Hristov

O2 Technical Design Report
Editorial Committee: L. Betev, P. Buncic, S. Chapeland, F. Cliff, P. Hristov, T. Kollegger, M. Krzewicki, K. Read, J. Thaeder, B. von Haller, P. Vande Vyvre
Physics requirement chapter: Andrea Dainese

15 Design strategy
Iterative process: Design → Technology benchmarks → Model → Prototype, then back to Design.

16 Software Framework
[Diagram] Reconstruction chain: Clusters → Tracking → PID → ESD → AOD → Analysis, fed by real data and by Simulation. Design goals: multi-platform, multi-application, built on public-domain software.
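The chain above can be sketched as a sequence of stages, each consuming the previous stage's output. A minimal sketch only; the function names and data shapes below are hypothetical placeholders, not the O2 framework's real API:

```python
# Toy version of the Clusters -> Tracking -> PID -> ESD -> AOD chain.
# Every stage here is a stand-in; real reconstruction stages are far
# more complex and are not reproduced from the O2 framework.
def clusterize(raw):  return {"clusters": raw}
def track(data):      return {**data, "tracks": len(data["clusters"])}
def identify(data):   return {**data, "pid": "assigned"}
def to_esd(data):     return {"esd": data}                      # Event Summary Data
def to_aod(data):     return {"aod": data["esd"]["tracks"]}     # Analysis Object Data

def reconstruct(raw_event):
    """Run one event through the full staged chain."""
    data = clusterize(raw_event)
    data = track(data)
    data = identify(data)
    return to_aod(to_esd(data))

print(reconstruct([0.1, 0.2, 0.3]))  # {'aod': 3}
```

The point of the staged structure is that each step shrinks the data (raw signals → clusters → tracks → compact analysis objects), which is exactly the data-volume reduction the online reconstruction must deliver.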

17 Future steps
- A new computing system (O2) should be ready for the ALICE upgrade during LHC LS2 (currently scheduled for 2018-19).
- The ALICE O2 R&D effort started in 2013 and is progressing well:
  - Design, modelling, benchmarks, prototype
  - Development of a new software framework
- More people needed on various topics:
  - Firmware development
  - Software development (framework and algorithms)
  - Benchmarking of new technologies
  - Modeling
- Schedule:
  - June '15: submission of the TDR, finalize the project funding
  - '16 - '17: technology choices and software development
  - June '18 - June '20: installation and commissioning

