Slide 1: Lyon Analysis Facility - status & evolution
Renaud Vernet (renaud.vernet@cc.in2p3.fr)
Slide 2: What is PROOF?
- Parallel ROOT Facility.
- Designed to be an alternative to the GRID for data analysis: CPU power can be large and results are obtained quickly.
- Code is submitted to the master, which balances the load between the slaves (the available CPUs); a minimal usage sketch is given below.
[Diagram of our current setup: PROOF master and worker nodes with cluster disks (2.5 TB), reading data via xrootd from an xrootd SE (17 TB).]
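To make the workflow above concrete, here is a minimal sketch of how a user would drive an analysis on the LAF from a ROOT session. The master hostname ccapl0001 is the one of our setup (see slide 3); the tree name, file URLs, selector name and PAR package name are hypothetical placeholders, and the PAR lines refer to the package mechanism described on the next slide.

```cpp
// Minimal PROOF session sketch. The master name ccapl0001 matches the
// setup described on the next slide; the tree name, file paths, selector
// and PAR package names are hypothetical placeholders.
#include "TProof.h"
#include "TChain.h"

void runOnLAF()
{
   // Connect to the PROOF master; authentication is certificate-based.
   TProof *proof = TProof::Open("ccapl0001");

   // Experiment-specific libraries packaged as a PAR file (tar archive)
   // can be uploaded and enabled on all workers (hypothetical name).
   proof->UploadPackage("mylibs.par");
   proof->EnablePackage("mylibs");

   // Chain of input files served by the xrootd SE (hypothetical URLs).
   TChain chain("myTree");
   chain.Add("root://se.example.org//store/mydataset/file001.root");
   chain.Add("root://se.example.org//store/mydataset/file002.root");

   // Hand the chain to PROOF: the master splits the entries into packets
   // and balances the load between the workers.
   chain.SetProof();
   chain.Process("MySelector.C+");   // selector compiled with ACLiC on the workers
}
```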
Slide 3: PROOF and LAF
Pros:
- ROOT is the analysis software common to all HEP experiments (C++ based).
- Few requirements on the OS and system.
- Experiment-specific libraries are compiled on PROOF using PAR files (tar archives), so the facility can be shared between several experiments/VOs.
- Priorities can be defined.
Cons:
- The main constraint is on analysis code design: the code must inherit from ROOT's TSelector (a bare-bones skeleton is sketched below).
- Not fully "interactive", but monitoring is possible on each worker.
- Can communicate with only one data-file manager: xrootd.
Our current setup:
- All nodes: Intel(R) Xeon(R) CPU / 8 cores [2.66 GHz] / 16 GB RAM / SL4.
- 1 master (ccapl0001) + 20 slaves (ccapl00[02-21]) = 160 workers.
- SE (xrootd): 17 TB.
- Authentication to PROOF is certificate-based.
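Since the main constraint is the TSelector interface, the sketch below shows the bare-bones structure PROOF expects. The class name and the example histogram are purely illustrative, and the branch-reading code is left out: the event loop lives in Process(), and objects added to fOutput on the workers are merged automatically and returned to the client.

```cpp
// Bare-bones TSelector skeleton (illustrative class and histogram names).
#include "TSelector.h"
#include "TH1F.h"

class MySelector : public TSelector {
public:
   TH1F *fHistPt = nullptr;                 // example output histogram

   Int_t  Version() const override { return 2; }

   void   SlaveBegin(TTree *) override      // called once on each worker
   {
      fHistPt = new TH1F("hPt", "p_{T} distribution", 100, 0., 10.);
      fOutput->Add(fHistPt);                // register for merging on the master
   }

   Bool_t Process(Long64_t entry) override  // called for every event assigned to this worker
   {
      // read the needed branches for 'entry' and fill fHistPt here
      return kTRUE;
   }

   void   Terminate() override              // called on the client after merging
   {
      // retrieve the merged objects from fOutput, draw or save them
   }

   ClassDefOverride(MySelector, 0);
};
```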
Slide 4: Live demo...
Slide 5: A few results (with ALICE software)
- Saturation at ~1800 events/s, reached with typically 8-10 workers.
- Data rate ~130 MB/s, of the order of the bandwidth between the data and the workers (rough cross-check below).
- Is that bandwidth the only limiting factor, or is there another effect behind it? The answer will come with a larger bandwidth.
[Plot: event rate as a function of the number of workers (projection X).]
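As a back-of-the-envelope cross-check (an estimate from the numbers quoted on this slide, not a separate measurement): a 1 Gb/s link to the storage, as in the setup shown on slide 9, corresponds to about 125 MB/s, which is consistent with the observed ~130 MB/s plateau; and 130 MB/s divided by 1800 events/s gives an average of roughly 70 kB read per event.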
Slide 6: Situation for experiments: ALICE
- PROOF is already part of the computing model; extensively tested and used for several years.
- Analysis tasks belonging to AliRoot run on both the GRID and PROOF.
- Only remaining issue: direct access to data on the GRID and data staging. The AliEn services must run on Solaris, which is not the case yet; work is ongoing and hopefully completed soon.
- Other experiments may face that problem too(?).
Slide 7: Situation for experiments: ATLAS
- Contact person: M. Airaj @ CEA Saclay.
- Tests successful: library compilation OK, access to data OK.
- Analysis code run on 500k events stored on the xrootd SE.
[Plots: processing time (s) and data rate (MB/s) as a function of the number of CPUs.]
Slide 8: Situation for experiments: CMS
- Contact persons: M. Bluj, A. Kalinowski @ LLR.
- No real tests performed yet: not all CMS libraries can be loaded.
- Probable mismatch between the CMS ROOT flavour and the bare ROOT used by PROOF; under investigation.
Slide 9: Evaluation of different storage possibilities
Read data from:
- "External" storage: the dedicated xrootd SE.
- "Internal" storage: the hard disks of the cluster nodes.
How the two options look from the analysis code is sketched below.
[Diagram: PROOF master and workers, xrootd SE, 1 Gb/s network link.]
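From the analysis side, the two options differ only in the URLs the chain is built from. Below is a hedged sketch: hostnames and paths are hypothetical, and the "internal" case assumes an xrootd daemon running on each cluster node to export its local disk.

```cpp
// Sketch: the same analysis can be pointed at either storage back-end just
// by changing the file URLs in the chain (hostnames and paths hypothetical).
#include "TChain.h"

void buildChain(TChain &chain, bool useExternalSE)
{
   if (useExternalSE) {
      // "External" storage: file served by the dedicated xrootd SE
      chain.Add("root://se.example.org//store/mydataset/file001.root");
   } else {
      // "Internal" storage: file on the local disk of a cluster node,
      // exported by an xrootd daemon assumed to run on that node
      chain.Add("root://node05.example.org//data/mydataset/file001.root");
   }
}
```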
Slide 10: Performance benchmark (1 node only)
Three configurations compared:
- Analysis on cluster disks: the worker reads data from another worker's disk.
- Analysis on cluster disks: the worker reads data from its own disk.
- Analysis on xrootd storage: the worker reads data from the SE.
Conclusions:
- Performance is better when getting the data from the external SE (over Ethernet).
- If local disks are used to store data (as at the CAF@CERN), they must be changed.
Slide 11: Summary
LAF is working:
- Successful tests from ATLAS and ALICE; CMS feedback is still needed.
... but it can and must be improved:
- Bandwidth between the SE and the nodes.
- Interface with GRID services (data staging).
- Tests from several simultaneous users are needed to stress the system; the results will help define future improvements.
- Possibility of using SSD disks for fast access (cache): can that be supported by PROOF/xrootd? To be investigated.
- No constraint on design; open to all good solutions.