1 PROOF, The Parallel ROOT Facility
Gerardo Ganis / CERN
CHEP06, Computing in High Energy Physics, 13 – 17 Feb 2006, Mumbai, India
"Bring the KB to the PB and not the PB to the KB"

2 What is this talk about
G. Ganis, CHEP06, 13 Feb 2006
End-user analysis of HEP data on distributed systems using the ROOT data model.
ROOT? A package providing:
- Efficient data storage supporting structured data sets
- An efficient query system to access the information
- A complete set of tools for scientific analysis
- Advanced 2D / 3D visualization and GUI systems
- A C++ interpreter
HEP data? Collections of independent events.

3 The ROOT data model: Trees & Selectors
G. Ganis, CHEP06, 13 Feb 2006
[Diagram: a Selector loops over the events of a Chain (a set of Trees made of branches and leaves); only the needed parts of each event are read.]
- Begin(): create histograms, define the output list
- Process(): per-event preselection and analysis
- Terminate(): final analysis (fitting, …) on the output list
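
To make the Begin()/Process()/Terminate() flow concrete, here is a minimal TSelector sketch in ROOT C++; the class name MySelector, the branch px and the histogram are illustrative and not taken from the slide.

   // Minimal TSelector sketch following the phases shown in the diagram.
   // Class, branch and histogram names are illustrative only.
   #include "TSelector.h"
   #include "TTree.h"
   #include "TH1F.h"
   #include "TMath.h"

   class MySelector : public TSelector {
   public:
      TTree  *fChain;    // the tree being analysed
      TH1F   *fHistPx;   // example output histogram
      Float_t fPx;       // example branch buffer

      MySelector() : fChain(0), fHistPx(0), fPx(0) {}

      virtual void Begin(TTree *) {
         // Book histograms and put them on the output list
         // (in a PROOF run the booking is done in SlaveBegin() on each worker)
         fHistPx = new TH1F("hPx", "px distribution", 100, -5., 5.);
         fOutput->Add(fHistPx);
      }
      virtual void Init(TTree *tree) {
         // Attach only the branches that are needed
         fChain = tree;
         fChain->SetBranchAddress("px", &fPx);
      }
      virtual Bool_t Process(Long64_t entry) {
         // Per-event preselection and analysis
         fChain->GetEntry(entry);
         if (TMath::Abs(fPx) < 5.) fHistPx->Fill(fPx);
         return kTRUE;
      }
      virtual void Terminate() {
         // Final analysis (fitting, drawing, …) on the merged output list
         if (fHistPx) fHistPx->Draw();
      }

      ClassDef(MySelector, 1);
   };

The same selector runs unchanged locally or on PROOF; on PROOF the per-worker output lists are merged back into fOutput on the client.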

4 End-User Analysis scenarios
G. Ganis, CHEP06, 13 Feb 2006
- Analysis performance is typically I/O bound
- Analysis requirements of an LHC experiment (~100 users):
  - CPU: ~ MSI2k (~ 250 Intel Core Duo)
  - Storage: ~ PB
  - Bandwidth: ~ 100 MB/s
- The intrinsic parallelism of the data is classically exploited by splitting long analysis jobs into smaller sub-jobs, each addressing a different portion of the data and run concurrently
- Farms of a few hundred nodes; good access to the data is required

5 “Classic” approach
G. Ganis, CHEP06, 13 Feb 2006
[Diagram: the user queries a catalog, splits the data files, submits myAna.C jobs to a batch farm (queues, manager) reading from storage, then merges the outputs for the final analysis.]
- “static” use of resources
- jobs frozen: 1 job / worker node
- “manual” splitting and merging
- limited monitoring (only at the end of each job)

6 The PROOF approach
G. Ganis, CHEP06, 13 Feb 2006
[Diagram: the client sends a PROOF query (data file list + myAna.C) to the MASTER of the PROOF farm; the master schedules the work on the storage-attached workers and returns merged feedback and the merged final outputs, using a catalog and a scheduler.]
- farm perceived as an extension of the local PC
- same macro and syntax as in a local session
- more dynamic use of resources
- real-time feedback
- automated splitting and merging

7 PROOF – Multi-tier Architecture
G. Ganis, CHEP06, 13 Feb 2006
[Diagram: multi-tier setup of client, master, sub-masters and workers connected via proofd; labels mark where a good connection is VERY important and where it is less important.]
- Optimize for data locality or for efficient data-server access
- Adapts to clusters of clusters or wide-area virtual clusters
- Geographically separated domains; heterogeneous machine types

8 PROOF: ingredients
G. Ganis, CHEP06, 13 Feb 2006
- PROOF servers are full-featured ROOT applications
  - communication layer set up via light daemons ((x)proofd)
  - authentication: password-based, GSI, Kerberos, …
- Dynamic load balancing
  - pull architecture: workers ask for work when finished
  - faster workers get more work
- Merging infrastructure via Merge()
  - implemented for standard objects: histograms, trees, …
  - user-defined strategy by overloading / defining Merge() (see the sketch below)
- Feedback at a tunable frequency
  - standard statistics histograms (events/packets per node, …)
  - temporary version of any output object
- Package manager for optimized upload of additional libraries needed by the analysis
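
As an illustration of the user-defined merging mentioned above, a minimal sketch of a custom output object implementing the usual ROOT Merge(TCollection*) convention that PROOF invokes on the master; the class MyCounter and its member are hypothetical.

   // Sketch of a user output object mergeable by PROOF, assuming the
   // standard ROOT convention Long64_t Merge(TCollection*).
   #include "TNamed.h"
   #include "TCollection.h"

   class MyCounter : public TNamed {
   public:
      Long64_t fSelected;   // e.g. number of selected events on one worker

      MyCounter(const char *name = "counter") : TNamed(name, ""), fSelected(0) {}

      // Called on the master with the list of same-named objects coming
      // from the workers; fold them into this one.
      Long64_t Merge(TCollection *list) {
         if (!list) return 0;
         TIter next(list);
         while (TObject *obj = next()) {
            MyCounter *c = dynamic_cast<MyCounter *>(obj);
            if (c) fSelected += c->fSelected;
         }
         return list->GetEntries() + 1;
      }

      ClassDef(MyCounter, 1);
   };

In a selector one would create such an object in SlaveBegin(), add it to fOutput and increment it in Process(); the per-worker copies are then merged automatically.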

9 PROOF – Scalability
G. Ganis, CHEP06, 13 Feb 2006
- Case of data locality
- Test cluster nodes: dual 1 GHz Itanium II CPUs, 2 GB RAM, 2x75 GB 15K SCSI disks, 1 Fast Ethernet NIC, 1 Gigabit Ethernet NIC (not used)
- Each node holds one copy of the data set (4 files, 277 MB in total); with 32 nodes: 8.8 GB in 128 files, 9 million events
- 8.8 GB, 128 files: 1 node takes 325 s, 32 nodes in parallel take 12 s
- Efficiency ~ 90 %

10 PROOF: data access and scheduling issues
G. Ganis, CHEP06, 13 Feb 2006
- Low latency in data access is essential
  - minimize file-opening overhead (asynchronous open)
  - caching, asynchronous read-ahead of the required segments
  - supported by xrootd (see the sketch below)
- Scheduling of large numbers of users
  - interface to a generic resource broker to optimize the load
  - batch systems have this already: concrete implementations exist for LSF and Condor; planned for BQS, Sun Grid Engine, …
  - on the GRID, use the available services to determine the session configuration
F. Furano, #368, Feb 15th, 16:20
A. Hanushevsky, #407, Feb 15th, 17:00
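
For illustration, a minimal sketch of the asynchronous-open idiom mentioned above, using ROOT's TFile::AsyncOpen; the xrootd URL is a placeholder, not a real server.

   // Sketch: hide the file-opening latency by starting the open early
   // and collecting the TFile only when it is actually needed.
   #include "TFile.h"

   void asyncOpenSketch()
   {
      // Trigger the (xrootd) open in the background
      TFileOpenHandle *h = TFile::AsyncOpen("root://myserver//data/run1.root");

      // ... do other preparatory work here ...

      // Block only now, when the file is really needed
      TFile *f = TFile::Open(h);
      if (f && !f->IsZombie()) {
         // read the tree, etc.
      }
   }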

11 GRID
G. Ganis, CHEP06, 13 Feb 2006
[Diagram: user session – PROOF master server – PROOF sub-master servers – PROOF slave servers, with GRID service interfaces: Grid/ROOT authentication, Grid access control service, TGrid UI/queue UI, proofd startup, Grid file/metadata catalogue.]
- Guaranteed site access through PROOF sub-masters calling out to the master (agent technology)
- The client retrieves the list of logical files (LFN + MSN) from the Grid file/metadata catalogue
- Slave servers access the data via xrootd from local disk pools
Demo'ed by ALICE at SC03, SC04, …

12 What is our goal?

13 Typical end-user job-length distribution
G. Ganis, CHEP06, 13 Feb 2006
- Interactive analysis using local resources, e.g. end-analysis calculations and visualization
- Analysis jobs with well-defined algorithms (e.g. production of personal trees)
- Medium-term jobs, e.g. analysis design and development, also using non-local resources
Goal: bring these to the same level of perception

14 Sample of analysis activity
G. Ganis, CHEP06, 13 Feb 2006
Monday at 10h15, ROOT session on my laptop:
- AQ1: 1 s query produces a local histogram
- AQ2: a 10 mn query submitted to PROOF1
- AQ3 -> AQ7: short queries
- AQ8: a 10 h query submitted to PROOF2
Monday at 16h25, ROOT session on my laptop:
- BQ1: browse results of AQ2
- BQ2: browse temporary results of AQ8
- BQ3 -> BQ6: submit four 10 mn queries to PROOF1
Wednesday at 8h40, browse from any web browser:
- CQ1: browse results of AQ8 and BQ3 -> BQ6

15 What do we have now?
G. Ganis, CHEP06, 13 Feb 2006
- Most of the ingredients are in place
  - support for multiple sessions
  - asynchronous (non-blocking) running mode
  - support for disconnect / reconnect
  - xrootd used as launcher of the server sessions
  - query-results classification and management: retrieve / archive / remove
- Command-line controlled via the TProof API … (see the sketch below)
- … but also GUI controlled
Virtual Demo
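
A minimal sketch of command-line control via the TProof API, written in today's ROOT syntax; the master URL, the input files and the "ASYN" option string for non-blocking processing are assumptions, not taken from the slides.

   #include "TProof.h"
   #include "TChain.h"

   void proofSessionSketch()
   {
      // Open (or reconnect to) a PROOF session
      TProof *proof = TProof::Open("proofmaster");

      // Build a chain and attach it to the PROOF session
      TChain chain("Events");
      chain.Add("root://myserver//data/run1.root");  // placeholder file
      chain.SetProof();

      // Submit a non-blocking query: the prompt comes back immediately
      // ("ASYN" is the assumed option for asynchronous mode)
      chain.Process("MySelector.C+", "ASYN");

      // Later, possibly after disconnecting and reconnecting:
      proof->ShowQueries();       // list the known queries and their status
      proof->GetQueryResults();   // access the stored query results
   }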

16 A real PROOF session - connection
G. Ganis, CHEP06, 13 Feb 2006
[GUI screenshot: choose a predefined session or define a new one; a progress bar and a status field show the session startup until it is ready.]

17 A real PROOF session - package manager
G. Ganis, CHEP06, 13 Feb 2006
[GUI screenshot: the Package tab of the session viewer.]
- PAR (PROOF ARchive) files
  - contain a ROOT-INF directory with BUILD.sh and SETUP.C
  - control the setup of each worker (see the sketch below)
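
To make the PAR mechanism concrete, a hypothetical package layout and the TProof calls commonly used to ship and enable it; the package name mypack is an assumption.

   // Sketch of a hypothetical PAR package "mypack.par" and how it is
   // uploaded and enabled on the workers. Assumed package content:
   //
   //   mypack/
   //     ROOT-INF/BUILD.sh   -> builds the package (e.g. compiles a library)
   //     ROOT-INF/SETUP.C    -> loads the library / sets include paths
   //     src/, include/, ... -> the analysis code itself
   //
   // tar/gzipped into mypack.par. On the client:
   #include "TProof.h"

   void enablePackageSketch(TProof *proof)
   {
      proof->UploadPackage("mypack.par");  // ship the archive to the cluster
      proof->EnablePackage("mypack");      // run BUILD.sh and SETUP.C on every worker
   }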

18 A real PROOF session - query definition and running
G. Ganis, CHEP06, 13 Feb 2006
[GUI screenshot: give the query a name, execute the macro to create the chain, select the chain and choose the selector; during processing the feedback histograms and the processing information are displayed.]

19 A real PROOF session: query browsing and finalization
G. Ganis, CHEP06, 13 Feb 2006
[GUI screenshot: the session viewer shows the details of the query and a folder with the output objects; the raw histogram can be browsed and finalized.]

20 A real PROOF session: disconnection / reconnection
G. Ganis, CHEP06, 13 Feb 2006
- Running sessions are kept alive by the server-side coordinator
- Reconnection is much faster: no process to fork
[GUI screenshots: disconnect, then reconnect; after reconnection the query is shown as terminated.]

21 A real PROOF session: chain viewer
G. Ganis, CHEP06, 13 Feb 2006
[GUI screenshot: right-click on the chain to open the chain viewer.]

22 Ongoing activities / Near-future plans
G. Ganis, CHEP06, 13 Feb 2006
- Data file upload manager
  - optimally distribute the data on the cluster storage
  - keep track of existing data sets for optimized re-runs
- Dynamic cluster configuration
  - come-and-go functionality for worker nodes
  - use the olbd network to get information about the load on the cluster
- Optimizations
  - packetizer (re-assignment of being-processed packets to fast idle slaves)
  - data access (fully exploit the asynchronous features of xrootd)
- Monitoring of cluster behaviour
  - MonALISA: allows the definition of ad hoc parameters, e.g. I/O per node per query
- Improved handling of error conditions
  - identify cases hanging the system, improve error logging, …
  - exploit the olbd control network for a better overview of the cluster
- Testing and consolidation
- Documentation

23 Who’s using PROOF?
G. Ganis, CHEP06, 13 Feb 2006
- ALICE (see, e.g., CHEP04; LHCC review Nov 2005)
- PHOBOS
- CMS analysis prototype
Development test-beds
- Currently using CERN phased-out machines: 35 dual Pentium III 800 MHz / 512 MB RAM, 100 Mbit/s Ethernet, 600 GB total disk
- Contact with Lyon / CC-IN2P3 (ALICE): up to 16 dual Xeon 2.8 GHz, 200 GB scratch each
- Request for a new test-bed at CERN (LCG / ALICE / CMS)
M. Ballintijn, #374, Feb 15th, 16:40
I. González, #267, Feb 14th, 16:00

24 People working on the project
G. Ganis, CHEP06, 13 Feb 2006
- B. Bellenot, M. Biskup, R. Brun, G. Ganis, J. Iwaszkiewicz, G. Kickinger, P. Nilsson, A. Peters, F. Rademakers (CERN)
- M. Ballintijn, C. Loizides, C. Reed (MIT)
- D. Feichtinger (PSI)
- P. Canal (FNAL)

25 The End
G. Ganis, CHEP06, 13 Feb 2006
- Questions?
Links
- Discussion topic at the ROOT forum

26 PROOF – backup slides
G. Ganis, CHEP06, 13 Feb 2006
- Additional recent improvements
- Stateless connection with XrdProofd
- XrdProofd basics
- Connection layer
- Pull architecture
- Command-line session

27 Additional recent improvements
G. Ganis, CHEP06, 13 Feb 2006
- Session startup
  - parallel startup with threads
  - optimized sequential startup
  - startup status and progress bar
- New progressive packetizer: open files as needed
  - continuously re-estimates the number of entries
  - can order files based on availability
- Draw() and viewer functionality via PROOF

28 Interactive batch: stateless connection with XrdProofd
G. Ganis, CHEP06, 13 Feb 2006
- Keeping sessions alive when clients disconnect needs coordination; the existing proofd required a deep re-design
- xrd (the networking and work-dispatching layer of xrootd)
  - generic framework to handle protocols
  - already used to launch rootd for TNetFile clients
- Dedicated protocol to launch and manage PROOF sessions
  - sessions are forked as separate processes to protect against client bugs
  - they talk to the coordinator via a UNIX socket
- Disconnect / reconnect handled naturally
- Asynchronous reading allows setting up a control/interrupt network independent of OOB
- The xrd/olbd control network is used to query status information
A. Hanushevsky, #407, Feb 15th, 17:00

29 XrdProofd basics
G. Ganis, CHEP06, 13 Feb 2006
- Prototype based on XROOTD
[Diagram: XROOTD side (links, XrdXrootdProtocol, files) compared with the XPD side (links, XrdProofdProtocol, proofserv), with the static area, multi-threading infrastructure and per-user worker servers.]
- XrdProofdProtocol: the client gateway to proofserv
- Static area for all client information and its activities

30 XrdProofd communication layer
G. Ganis, CHEP06, 13 Feb 2006
[Diagram: the client connects via TXSocket to XrdProofd (XS) on the master, which forks a proofserv; the master connects in turn to XrdProofd on each slave (1 … n), each forking a proofslave; all connections go through XRD links.]

31 Workflow: Pull Architecture
G. Ganis, CHEP06, 13 Feb 2006
[Diagram: the master hands the next data packet to whichever worker asks for work; workers process their packet and come back for more.]
- Dynamic load balancing is naturally achieved: faster workers ask for, and get, more packets (an illustrative sketch follows)
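
The following is a purely illustrative, single-process sketch of the pull idea, not the actual PROOF packetizer code: the master owns the list of packets and serves the next one to whichever worker asks.

   // Illustrative sketch of the pull architecture (NOT the PROOF code):
   // faster workers finish their packets sooner, ask again sooner, and
   // therefore naturally process more packets.
   #include <cstdio>

   struct Packet { long first, nEntries; };   // a slice of the data set

   class Packetizer {                         // master-side packet source
      long fNext, fTotal, fStep;
   public:
      Packetizer(long total, long step) : fNext(0), fTotal(total), fStep(step) {}
      bool NextPacket(Packet &p) {            // called when a worker asks for work
         if (fNext >= fTotal) return false;   // nothing left: the worker goes idle
         p.first = fNext;
         p.nEntries = (fNext + fStep <= fTotal) ? fStep : fTotal - fNext;
         fNext += p.nEntries;
         return true;
      }
   };

   int main() {
      Packetizer master(9000000, 100000);     // 9 M events, 100 k per packet
      Packet p;
      // In PROOF the requests arrive over the network from many workers;
      // here a single loop stands in for "whoever asks next".
      while (master.NextPacket(p))
         std::printf("assign entries [%ld, %ld)\n", p.first, p.first + p.nEntries);
      return 0;
   }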

32 AliEn: command-line session
G. Ganis, CHEP06, 13 Feb 2006
- TGrid: abstract interface for all Grid services

   // Connect
   TGrid *alien = TGrid::Connect("alien://");

   // Query the file catalogue
   TString path = "/alice/cern.ch/user/p/peters/analysis/miniesd/";
   TGridResult *res = alien->Query(path, "*.root");

   // Create a chain from the list of files
   TChain chain("Events", "session");
   chain.AddFileInfoList(res->GetFileInfoList());

   // Open a PROOF session
   TProof *proof = TProof::Open("proofmaster");

   // Process your query
   chain.Process("selector.C");