BaBar and the Grid
Roger Barlow
Dave Bailey, Chris Brew, Giuliano Castelli, James Werner, Fergus Wilson and Will Roethel
GridPP18, Glasgow, March 20th 2007

What we’re doing: Monte Carlo, skimming, data analysis.

History
2002: Pioneering BaBarGrid demonstrator. BaBar analysis software set up at the RAL ‘Tier A’ centre; successful displacement of physics analysis off-site. Common fund rebate to PPARC.
2007: BaBarGrid still not in general use. PPARC reneges on the MoU disc/CPU allocation and RAL loses its Tier A status; PPARC loses the rebate.

BaBar Tier A news…
IN2P3: The commitments of CC-IN2P3 for 2007 (150 TB and 1500 CPU units) and 2008 (200 TB and 1500 CPU units) are confirmed. For both years the CPUs will be there at the end of June and all workers will be made available to users during the summer; the disks will be available from mid-July and will need a couple of months to be fully deployed. Four shutdowns are foreseen per year, each about one day long, announced well in advance; for 2007 the dates are March 20, June 12, September 18 and December 4. SL4: driven by the LHC.
GridKa: the situation hasn't changed: 27 TB of disk and 100 SLAC units of CPU in 2007 and 2008. Hardware for 2007 is already in place, installed and currently running burn-in tests. The 2007 CPUs will be delivered on April 1st; the 2007 disk has to be configured and should be made available during April as well. For 2008 the current milestone is again April. SL4: new CPUs are already running SL4; the other CPUs will be upgraded from SL3 when gLite is shown to work properly with SL4.
RAL: no new investment at the RAL Tier A for BaBar. Non-LHC experiments nominally get 5-10% of the overall computing resources (dominated by the LHC MoU), but RAL is currently going through a budget crisis. SL4: will be driven by CERN and the LHC; Tier 2s are likely to follow RAL's lead.
INFN: Padova has bought its 2007 hardware, some of it already delivered. The CNAF disk is installed; the CNAF CPU will be installed after their shutdown, which should be in May (subject to sign-off on safety aspects by the fire department etc.). For 2008 there is no formal decision: funding will no longer go directly to CNAF but via experimental budgets, in which case BaBar Italy can either pay from its own budget to install hardware in Italy or pay the common fund to install at SLAC. SL4: Padova is a BaBar-only site so it can change when we need; CNAF will follow the LHC.

Are we downhearted? No! Reasons to be cheerful:
1) Tier 2 centre at Manchester with 2000 CPUs and 500 TB. With a fair share of this we can really do things.
2) Release 22 of the BaBar software is now out. A ROOT-based conditions database is installed – the last use of Objectivity finally removed.

Monte Carlo (SP)
A tarball is made of all programs and files, and runs at Manchester and RAL as a production system. More than 500 million events have been generated, processed and sent to SLAC. We will extend to more sites now that Objectivity is not required.
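The slides give no detail of how the tarball jobs are wrapped, but the idea can be illustrated with a minimal, hypothetical worker-node script; the tarball name, the run_sp.sh entry point and the directory layout below are placeholders, not the actual BaBar SP production tools.

```python
#!/usr/bin/env python
"""Minimal sketch of a worker-node wrapper for a self-contained SP job.
The tarball name, entry-point script and paths are hypothetical placeholders."""
import os
import subprocess
import sys
import tarfile


def run_sp_job(tarball="sp_tarball.tar.gz", workdir="sp_work"):
    # Unpack the tarball containing all programs, configuration and conditions files
    os.makedirs(workdir, exist_ok=True)
    with tarfile.open(tarball) as tar:
        tar.extractall(workdir)
    # Run the simulation entry point shipped inside the tarball
    result = subprocess.run(["./run_sp.sh"], cwd=workdir)
    return result.returncode


if __name__ == "__main__":
    sys.exit(run_sp_job())
```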

Skimming: the BaBar analysis model. AllEvents (66 TB) is skimmed into 220 different skims (and growing), for different analysis selections; some are pointer skims, some are deep copies.

Skimming details
A major computing load, both CPU and I/O: skimming 100K events takes ~10 hours, and there are ~10^9 events in AllEvents, so BaBar is looking for resources outside SLAC. The skim process uses the TaskManager software (written and gridified by Will Roethel). Tested at the RAL Tier 2 centre; production at the Manchester Tier 2 (Chris Brew, Giuliano Castelli, Dave Bailey).
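A quick back-of-the-envelope calculation with the numbers quoted on this slide shows the scale of a full skim pass (a sketch only, using the approximate per-job figures above):

```python
# Rough scale of a full skim pass, using the figures quoted on the slide.
events_total = 1e9        # ~10^9 events in AllEvents
events_per_job = 1e5      # skim jobs process ~100K events each
hours_per_job = 10.0      # ~10 hours per job

n_jobs = events_total / events_per_job      # ~10,000 jobs
cpu_hours = n_jobs * hours_per_job          # ~100,000 CPU-hours
print("jobs: %.0f, CPU-hours: %.0f (~%.1f CPU-years)"
      % (n_jobs, cpu_hours, cpu_hours / (24 * 365)))
# ~11 CPU-years: even a 2000-CPU farm needs days of dedicated wall time.
```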

Skimming details
Set up a 2 TB xrootd server and import data from SLAC (slow: ~10 Mbit/s – but we’re working on it). Skim jobs are submitted to the Tier 2 using the Grid; moving data between the server and the farm is fast (~1 Gbit/s). Skim files (~1 GB per job) are sent to RAL for merging (this will be done at Manchester in due course). The system is running successfully and is going into production.
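The quoted link speeds also show why the import from SLAC is the bottleneck; a small sketch of the arithmetic, assuming the ~10 Mbit/s and ~1 Gbit/s figures above are sustained rates:

```python
# Rough transfer-time estimates from the link speeds quoted on the slide.
def days_to_transfer(terabytes, megabits_per_s):
    bits = terabytes * 1e12 * 8                  # decimal TB -> bits
    return bits / (megabits_per_s * 1e6) / 86400.0


print("2 TB from SLAC at ~10 Mbit/s   : %.0f days" % days_to_transfer(2, 10))
print("2 TB server-to-farm at ~1 Gbit/s: %.1f days" % days_to_transfer(2, 1000))
# ~19 days vs ~0.2 days: the wide-area import dominates the turnaround.
```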

EasyGrid: the job submission system that works! James Cunha Werner GridPP18 Meeting – University of Glasgow

Several benchmarks with BaBar experiment data:
Data gridification: particle identification; neutral pion decays; search for anti-deuteron.
Functional gridification: evolutionary neutral pion discriminant function.
Documentation (main web page): html files and 327 complementary files.
60-CPU production and 10-CPU development farms ran independently without any problem between November 2005 and September 2006. Available since GridPP11 (September 2004).

Date: Thu, 22 Dec :51:
From: Roger Barlow
To:
Subject: [BABAR-USERS] Manchester babar

Dear Manchester BaBarians, 2 bits of good news.
1) easyroot works. I have carefully idiot-proofed it, and if I can make it work then anyone can. Today it gives access to a small farm, meaning you can run several jobs in parallel and speed up your tauuser analysis by an order of magnitude. Soon we will enable the rest of the existing BaBar farm. And before long we have the 1000 node Dell farm. For brief instructions see …; for full instructions see …
2) we have a new big disk, thanks to Sabah. 1.6 TB. We need to decide what to put on it (and what to call it.)
Father Christmas has been busy... Roger
(Over a year ago)

η mesons in  decays Source Dr Marta Tavera

Physics analysis on the Tier 2
Copied the ntuples for a complete analysis to dCache and ran ROOT jobs using a minimal afs/gsiklog/vanilla Globus system. We are struggling with dCache problems: stress-testing our dCache exposes its weak points. The dCache files are distributed over ~1000 nodes; inevitably some nodes fail, the dCache catalogue doesn't know this, and jobs die. Progress is slow but positive. The next step is to run the standard BaBar analysis (BetaApp) on data collections.
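As an illustration of the kind of job involved, here is a minimal PyROOT sketch that reads ntuples from dCache through a dcap door and simply skips files it cannot open – the failure mode described above, where a file sitting on a dead pool node kills the whole job. The door URL, paths, tree name and file list are all hypothetical.

```python
import ROOT

# Hypothetical dcap door and ntuple paths; real BaBar paths and tree names differ.
files = [
    "dcap://dcache-door.example.ac.uk/pnfs/babar/ntuples/ntuple_%03d.root" % i
    for i in range(10)
]

entries = 0
for url in files:
    f = ROOT.TFile.Open(url)
    # If the pool node holding this file is down the open fails:
    # skip the file rather than letting the whole job die.
    if not f or f.IsZombie():
        print("skipping unreadable file:", url)
        continue
    tree = f.Get("ntp1")          # hypothetical tree name
    if tree:
        entries += tree.GetEntries()
    f.Close()

print("entries read:", entries)
```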

Outlook
GridSP: in production; will extend to more sites.
GridSkimming: ready to go.
EasyGrid: works; for users it needs farms with BaBar data.
BaBar data at the Manchester Tier 2: dCache being tested, xrootd now possible, plan to try slashgrid soon – ntuples today, full data files tomorrow.

And finally
See you in Manchester for OGF20/EGEE and for the EPS conference, which has a ‘Detectors and Data Handling’ session and is now open for registration and abstract submission.