GridPP10 Meeting, CERN, June 3rd 2004

BaBarGrid
Roger Barlow, Manchester University
GridPP10 Meeting, CERN, June 3rd 2004

1: Simulation
2: Data Distribution: The SRB
3: Distributed Analysis

1: Grid-based simulation (Fergus Wilson & co.)
Using existing UK farms (80 CPUs); a dedicated process at RAL merges the output and sends it to SLAC.
Use VDT Globus rather than LCG. Why? Installation difficulty and reliability/stability problems.
VDT Globus is a subset of LCG, so running on an LCG system is perfectly possible (in principle).
US groups talk of using GRID3. VDT Globus is also a subset of GRID3, but GRID3 and LCG are different. A mistake to rely on LCG features?
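As an illustration (not from the slides), here is a minimal sketch of dispatching one simulation job to a farm gatekeeper with the VDT Globus command-line tools; the gatekeeper contact string and the wrapper-script path are hypothetical.

```python
# Sketch: running one simulation job on a remote farm via VDT Globus.
# globus-job-run is the standard GT2 submit-and-run client; the contact
# string and executable path below are hypothetical stand-ins.
import subprocess

GATEKEEPER = "gridgate.example.ac.uk/jobmanager-pbs"  # hypothetical contact string

def submit_simulation(run_number: int) -> str:
    """Run one simulation job on the remote farm and return its stdout."""
    cmd = [
        "globus-job-run", GATEKEEPER,
        "/opt/babar/bin/run_simprod.sh",  # hypothetical wrapper script
        str(run_number),
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(submit_simulation(42))
```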

Current situation
5 million events in official production since 7th March. Best week (so far!): 1.6 million events.
Now producing at RHUL and Bristol; Manchester and Liverpool in ~2 weeks; then QMUL and Brunel. Four farms will produce 3-4 million events a week.
Sites are cooperative (they need to install the BaBar Conditions Database, which uses Objectivity).
The major problem has been firewalls: a complicated interaction with all the communication and ports, and identifying the source has been hard.

What the others are doing
The Italians and Germans are going the full-blown LCG route.
Objectivity database access is through networked AMS servers (roughly one server per 30 client processes); otherwise the BaBar environment is assumed to be available at the remote hosts.
Our approaches will converge one day. Meanwhile, they will try sending jobs to RAL, and we will try sending jobs to Ferrara.
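The one-server-per-30-processes rule of thumb translates directly into farm provisioning. A minimal sketch (the helper name is ours, not from the slides):

```python
import math

AMS_CLIENTS_PER_SERVER = 30  # rule of thumb from the slide: ~30 client processes per AMS server

def ams_servers_needed(n_processes: int) -> int:
    """Number of networked AMS servers a farm of n_processes would need."""
    return math.ceil(n_processes / AMS_CLIENTS_PER_SERVER)

print(ams_servers_needed(80))    # an 80-CPU farm -> 3 servers
print(ams_servers_needed(1000))  # a 1000-processor site -> 34 servers
```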

Future
Keep production running.
Test an LCG interface (RAL? Ferrara? Manchester Tier 2?) when we have the manpower; it will give more functionality and stability in the long term.
Smooth and streamline the process.

2: Data Distribution and The SRB (SLAC/BaBar)
Richard P. Mount, SLAC, May 20, 2004
These slides stolen (with permission) from a PPDG talk.

SLAC-BaBar Computing Fabric
Clients: 1500 dual-CPU Linux and 900 single-CPU Sun/Solaris machines, running the Objectivity/DB object database plus HEP-specific ROOT software (Xrootd).
IP network (Cisco) connects the clients to the disk servers.
Disk servers: 120 dual/quad-CPU Sun/Solaris machines with 400 TB of Sun FibreChannel RAID arrays.
IP network (Cisco) connects the disk servers to the tape servers, which run HPSS plus SLAC enhancements to the Objectivity and ROOT server code.
Tape servers: 25 dual-CPU Sun/Solaris machines driving 40 STK 9940B and 6 STK 9840A drives in 6 STK Powderhorn silos; over 1 PB of data.
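To make the tier ratios concrete, a toy summary using only the counts quoted on the slide (the dictionary layout is ours):

```python
# Toy model of the SLAC-BaBar fabric, using the counts quoted on the slide.
fabric = {
    "clients":      {"machines": 1500 + 900},            # Linux + Sun/Solaris
    "disk_servers": {"machines": 120, "disk_tb": 400},
    "tape_servers": {"machines": 25,  "tape_pb": 1},     # "over 1 PB"
}

# Rough ratios implied by the inventory:
clients_per_disk_server = fabric["clients"]["machines"] / fabric["disk_servers"]["machines"]
tb_per_disk_server = fabric["disk_servers"]["disk_tb"] / fabric["disk_servers"]["machines"]
print(clients_per_disk_server)  # -> 20.0 client machines per disk server
print(tb_per_disk_server)       # -> ~3.3 TB per disk server
```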

BaBar Tier-A Centers
A component of the Fall 2000 BaBar Computing Model: each offers resources at the disposal of BaBar and provides tens of percent of the total BaBar computing/analysis need. 50% of the BaBar computing investment was in Europe in 2002 and 2003.
CCIN2P3, Lyon, France: in operation for 3+ years.
RAL, UK: in operation for 2+ years.
INFN-Padova, Italy: in operation for 2 years.
GridKA, Karlsruhe, Germany: in operation for 1 year.

SLAC-PPDG Grid Team
Richard Mount (10%): PI
Bob Cowles: strategy and security
Adil Hasan (50%): BaBar data management
Andy Hanushevsky (20%): Xrootd, security, ...
Matteo Melani (80%): new hire
Wilko Kroeger (100%): SRB data distribution
Booker Bense: Grid software installation
Post doc: BaBar-OSG

Network/Grid Traffic
(figure slide; plots not captured in this transcript)

SLAC-BaBar-OSG
BaBar-US has been very successful in deploying Grid data distribution (SRB, US-Europe), but far behind BaBar-Europe in deploying Grid job execution (which Europe has in production for simulation).
The SLAC-BaBar-OSG plan: focus on achieving massive simulation production in the US within 12 months; make 1000 SLAC processors part of OSG; run BaBar simulation on SLAC and non-SLAC OSG resources.
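For a flavour of what SRB data distribution looks like from the client side, here is a minimal sketch driving the SRB Scommand clients from Python; the collection path and file name are hypothetical, and real BaBar replication is driven by higher-level tools.

```python
# Sketch: registering a file into an SRB collection with the Scommand
# clients (Sinit/Sput/Sls/Sexit). The collection path and file name are
# hypothetical stand-ins.
import subprocess

def srb(*args: str) -> None:
    subprocess.run(list(args), check=True)

srb("Sinit")                                # open a session (reads ~/.srb config)
srb("Sput", "run42.root",
    "/home/babar.slac/data/run42.root")     # hypothetical collection path
srb("Sls", "/home/babar.slac/data")         # check the file is visible
srb("Sexit")                                # close the session
```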

3: Distributed Analysis
At GridPP9:
Good news: a basic Grid job submission system (Alibaba/Gsub) deployed and working with the GANGA portal.
Bad news: low take-up, because users were uninterested and reliability was poor.
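To fix ideas, a sketch of the kind of wrapper such a submission system provides; every name here (the gsub command-line form, the JDL fields) is a hypothetical stand-in, since the slides do not show the actual Gsub interface.

```python
# Hypothetical sketch of a Gsub-style submission wrapper: write a job
# description for the user's analysis script and hand it to the submitter.
# The CLI invocation and JDL fields are stand-ins, not the real interface.
import subprocess
import tempfile
import textwrap

def submit_analysis(script: str, dataset: str) -> None:
    jdl = textwrap.dedent(f"""\
        Executable = "{script}";
        Arguments  = "{dataset}";
        StdOutput  = "job.out";
        StdError   = "job.err";
    """)
    with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
        f.write(jdl)
        jdl_path = f.name
    subprocess.run(["gsub", jdl_path], check=True)  # hypothetical invocation

submit_analysis("myanalysis.sh", "run4-allevents")
```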

Since then...
Mike: gave a talk at an IoP parallel session; wrote an abstract (accepted) for the All Hands meeting; writing his thesis. No real progress.
Alessandra: moved to a Tier 2 system manager post.
James: starts June 14th; attended the GridPP10 meeting.
Roger: submitted Proforma 3; completed the quarterly progress report; revised Proforma 3; advertised and recruited the replacement post; negotiated on the revised Proforma 3; wrote an abstract (pending) for CHEP; submitted JeSRP-1; wrote a contribution for the J. Phys. G Grid article.
Janusz: improved the portal; developing a web-based version.

Future two-point plan (1)
James to review, revise and relaunch the job submission system.
Work with the UK Grid/SP team (short term) and the Italian/German LCG system (long term).
Improve reliability through a core team of users on a development system.

Future two-point plan (2)
Drive Grid usage through incentive. RAL CPUs are very heavily loaded by BaBar; slow turnround means stressed users.
Make significant CPU resources available to BaBar users only through the Grid: some of the new Tier 1/A resources, and all the Tier 2 (Manchester) resources.
And watch Grid certificate take-up grow!

Final Word
Our problems today will be your problems tomorrow; for "problems", read "challenges".