BaBar & Grid
Eleonora Luppi for the BaBarGrid Group
TB GRID, Bologna, 15 February 2005

SPGrid
– Simulation Production (SP) in BaBar is a typical distributed effort
– We decided to give SP a higher priority than other kinds of BaBar software on the Grid
– SP needs a lot of CPU time, little input, and standard output
– BaBar Monte Carlo event simulation uses an Objectivity database for conditions and Xrootd for trigger background
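For illustration, a job with this profile (CPU-bound, small input sandbox, standard output) is described to the LCG Resource Broker with a short JDL file. The sketch below is hypothetical, not the actual BaBar SP configuration: the executable name, sandbox contents and CPU-time requirement are assumptions; Python is used only to write the file.

    import textwrap

    # Hypothetical JDL for a CPU-bound SP job with a small input sandbox
    # (executable name, sandboxes and the CPU-time requirement are examples).
    jdl = textwrap.dedent("""\
        Executable    = "run_sp.sh";
        StdOutput     = "sp.out";
        StdError      = "sp.err";
        InputSandbox  = {"run_sp.sh"};
        OutputSandbox = {"sp.out", "sp.err"};
        Requirements  = other.GlueCEPolicyMaxCPUTime > 2880;
        """)

    with open("sp_job.jdl", "w") as f:
        f.write(jdl)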

BaBar SPGrid
– In Italy: full Grid approach

Resources
– LCG middleware
– About 200 shared CPUs used for production tests
– AMS and Xrootd servers installed in Ferrara, Naples and Padua
– ProdTools installed on the User Interface in Ferrara
– BaBar Resource Broker in Ferrara

Production scheme (diagram)

Software Installation
– Simulation software packaged and installed on the sites involved
– Packages distributed and tagged using LCG tools:
  – ManageSoftwareTool (submitted like a standard job)
  – Transfer packages from the closest SE to the WNs
  – Run scripts to install the simulation software
  – lcg-ManageVOTag
– Main script:
  – Set all variables needed
  – Run Moose
  – Compress and transfer output to the SE
– ProdTools:
  – spsub integrated with standard Grid submission
  – Output retrieved from the SE to the UI using a custom script
  – spmerge integrated in standard Grid submission (test in progress)
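As a concrete illustration of the "Main script" steps listed above, here is a minimal Python sketch. It is not the actual BaBar script: the environment variables, paths, run number, destination SE and logical file name are all assumptions; only the overall flow (set variables, run Moose, compress, copy the result to the SE with the LCG data-management tools) follows the slide.

    import os
    import subprocess
    import tarfile

    # 1. Set the variables the job needs (names and values are assumptions).
    run = os.environ.get("SP_RUN", "000001")
    os.environ.setdefault("BFROOT", "/opt/exp_soft/babar")   # hypothetical install path

    # 2. Run the Moose simulation executable (arguments are illustrative).
    subprocess.check_call(["Moose", "--run", run])

    # 3. Compress the output directory produced by the simulation (assumed name "output").
    archive = "sp-output-%s.tar.gz" % run
    with tarfile.open(archive, "w:gz") as tar:
        tar.add("output")

    # 4. Copy the archive to the closest Storage Element and register it in the catalogue.
    #    lcg-cr is the standard LCG copy-and-register command; the SE host and LFN are examples.
    subprocess.check_call([
        "lcg-cr", "--vo", "babar",
        "-d", "se.fe.infn.it",
        "-l", "lfn:/grid/babar/sp/" + archive,
        "file://" + os.path.abspath(archive),
    ])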

SP on INFN Grid – Future Plans
– Add CNAF pool resources
– Add specific BaBar resources (trigger and Cdb servers), at least at the CNAF site
– Add other BaBar-Grid sites
– Gridify BaBar resources on the Italian Tier 1
– Start true production on the "Grid Farm"
– Coordination with other national Grid sites

Future in the UK (C. Brew)
– Started rolling reinstalls of the UK farms with SL
– All BaBar UK farm sites are also GridPP Tier 2 sites
– New LCG installation methods mean we can add LCG with minimal changes on our farms
– The overlap of work between UK-SPGrid and Grid.It SP is quite small
– Rewrite the submitter to:
  – Submit grid.it-style jobs and "full install" jobs (or merge these so the job chooses the appropriate type when it runs)
  – Probably do it in Python to make use of the native edg-job-* commands
  – Abstract the submit function so we can use other Grids
– Rewrite the unpacker to:
  – Unpack Grid.It jobs
  – Do it in parallel
– Rewrite the monitoring to monitor LCG jobs
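A minimal sketch of the "abstract the submit function" idea above, assuming a Python rewrite around the edg-job-* commands. The back-end registry, VO name and output parsing are illustrative assumptions, not the real UK submitter.

    import subprocess

    def submit_edg(jdl_path):
        """Submit a JDL with the EDG command-line tools and return the job identifier."""
        out = subprocess.check_output(
            ["edg-job-submit", "--vo", "babar", jdl_path], text=True)
        # edg-job-submit prints the job identifier as an https:// URL in its output
        for line in out.splitlines():
            line = line.strip()
            if line.startswith("https://"):
                return line
        raise RuntimeError("no job identifier found in edg-job-submit output")

    # Submission back-ends keyed by name; other Grids (or a local batch system)
    # would register their own function with the same signature.
    BACKENDS = {"edg": submit_edg}

    def submit(jdl_path, backend="edg"):
        return BACKENDS[backend](jdl_path)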

SPGrid status and plans in Canada

“Analysis” Grid
– “Analysis” in the following means any kind of experimental data processing
– Problems to face:
  – Large amount of data to analyze
  – Remote access to input data
  – Different kinds and sizes of output, depending on the type of analysis
– A step-by-step approach can be useful to reduce the complexity of the problem

Skim Production in the Grid – Outline (W. Roethel)
– Short definition: GRID = submit a job locally and run it anywhere
– Submit skim jobs from a central site (SLAC)
– Check the job requirements and run the job at an appropriate Tier site (or SLAC)
– Move the job output to the pre-defined output area (~ MB/job)
– Import the collection to be skimmed
– Roles: Skim Operator, Site Manager
– Merging will be done locally at SLAC

Site Requirements (W. Roethel)
– The software release has to exist
– The conditions database has to exist
– The input collection has to be available
– Local worker nodes need a way to move data to remote locations over the WAN (e.g. using Grid middleware)
– Similar to SP
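As a sketch of how the skim workflow on the previous slide could use these requirements to pick a site, the following Python fragment checks the four conditions above. The data structures and field names are assumptions made for illustration; this is not BaBar code.

    def site_is_suitable(site, job):
        """Check the four requirements above against a simple site description."""
        return (job["release"] in site["releases"]            # software release exists
                and site["has_conditions_db"]                  # conditions database exists
                and job["collection"] in site["collections"]   # input collection available
                and site["can_export_over_wan"])               # WNs can move data over the WAN

    def choose_site(job, sites):
        """Return the first Tier site (or SLAC) that satisfies the job requirements."""
        for site in sites:
            if site_is_suitable(site, job):
                return site
        raise RuntimeError("no suitable site for this skim job")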

“Analysis” Summary
– Step-by-step approach for the analysis
– Skimming seems to be a good starting point
– We need to study data handling possibilities to obtain an efficient and simple way to handle and maintain data, also at non-BaBar sites
– Results of tests with special grid-analysis infrastructures (see AliEn and its derivatives) tell us that they can be interesting, but they need more support and stability

BaBar on INFN Grid – Future Plans
– Use the CNAF pool resources for SP: the BaBar quota (50 machines) and the common pool when available
– Install soon a Cdb (Objectivity) and a background trigger (Xrootd) server at CNAF, on private BaBar resources
– Add other BaBar-Grid sites (Torino is going to have resources for Cdb and background triggers)
– Gridify BaBar resources on the Italian Tier 1, to asymptotically reach a common use of CNAF resources (but we need to maintain "traditional" accounts on the BaBar analysis farm during 2005)
– Use dedicated BaBar machines at CNAF for merging SP data before sending them to SLAC
– We need at least 2 TB of buffer at the CNAF SE for merging SP data (BaBar disk space at CNAF is 34 TB, including IBM and SE)
– When ready, skimming will need CPU resources and a buffer of about 4–5 TB for skimming and merging real data

Conclusions
– SPGrid is a reality for true SP production
– We will converge to a common European solution soon (in the meantime we will run SP jobs using the national Grid facilities)
– Our next goal is to have an Analysis Grid capable of running "standard" analysis programs (i.e. skimming)
– We need to pay special attention to data handling tools