BaBarGrid
GridPP10 Meeting, CERN, June 3rd 2004
Roger Barlow, Manchester University
1: Simulation
2: Data Distribution: The SRB
3: Distributed Analysis

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 2/16)
1: Grid-based simulation (Fergus Wilson and co.)
- Using existing UK farms (80 CPUs).
- Dedicated process at RAL merging the output and sending it to SLAC.
- Use VDT Globus rather than LCG. Why?
  - Installation difficulty and reliability/stability problems.
  - VDT Globus is a subset of LCG, so running on an LCG system is perfectly possible (in principle).
  - US groups talk of using GRID3; VDT Globus is also a subset of GRID3, but GRID3 and LCG are different. A mistake to rely on LCG features?
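The slides mention VDT Globus submission but show no commands. As a hedged illustration only, the sketch below shows what submitting one simulation job through plain Globus GRAM could look like; the gatekeeper contact string, executable path and RSL attributes are invented for this example and are not the actual BaBar production setup.

```python
# Minimal sketch (assumptions: the VDT Globus client tools are on PATH and a
# valid grid proxy already exists, e.g. from grid-proxy-init). The gatekeeper
# contact and executable below are hypothetical, not the real BaBar farm setup.
import subprocess

gatekeeper = "grid-farm.example.ac.uk/jobmanager-pbs"   # hypothetical GRAM contact
rsl = "&(executable=/opt/babar/bin/run_simulation.sh)(arguments=run42)(count=1)"

# globusrun -b submits in batch mode and prints a job contact URL, which can
# later be polled with globus-job-status and cleaned up with globus-job-clean.
result = subprocess.run(["globusrun", "-b", "-r", gatekeeper, rsl],
                        capture_output=True, text=True, check=True)
print("Submitted, job contact:", result.stdout.strip())
```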

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 3/16)
Current situation
- 5 million events in official production since 7th March. Best week (so far!): 1.6 million events.
- Now producing at RHUL and Bristol; Manchester and Liverpool in ~2 weeks, then QMUL and Brunel. Four farms will produce 3-4 million events a week.
- Sites are cooperative (they need to install the BaBar Conditions Database, which uses Objectivity).
- The major problem has been firewalls: a complicated interaction with all the communication and ports, and identifying the source of a failure has been hard.
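For the firewall problem above, one standard mitigation in Globus-based setups (a hedged sketch, not necessarily what the BaBar farms did) is to pin the toolkit's ephemeral TCP ports to a range the site firewall explicitly opens:

```python
# Hedged sketch: restrict Globus callback/data ports to a firewall-friendly
# range. The range 20000-25000 is an arbitrary example; it must match
# whatever the site firewall actually has open.
import os
import subprocess

os.environ["GLOBUS_TCP_PORT_RANGE"] = "20000,25000"    # ports Globus tools may listen on
os.environ["GLOBUS_TCP_SOURCE_RANGE"] = "20000,25000"  # source ports for outbound connections

# Child processes inherit the environment, so any Globus client launched from
# here (GRAM callbacks, GridFTP data channels) stays inside the opened range.
subprocess.run(["globus-job-status", "--help"], check=False)  # placeholder: run the real client here
```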

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 4/16)
What the others are doing
- The Italians and Germans are going the full-blown LCG route.
- The Objectivity database is served through networked AMS servers (roughly 1 server needed per ~30 processes); otherwise the BaBar environment is assumed to be available at the remote hosts.
- Our approaches will converge one day. Meanwhile, they will try sending jobs to RAL and we will try sending jobs to Ferrara.

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 5/16)
Future
- Keep production running.
- Test an LCG interface (at RAL? Ferrara? Manchester Tier 2?) when we have the manpower; it will give more functionality and stability in the long term.
- Smooth and streamline the process.

SLAC/BaBar, Richard P. Mount, SLAC, May 20
2: Data Distribution and The SRB

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 7/16)
SLAC-BaBar Computing Fabric (client, disk server and tape server tiers linked by a Cisco IP network):
- Client tier: 1500 dual-CPU Linux and 900 single-CPU Sun/Solaris machines.
- Disk server tier: 120 dual/quad-CPU Sun/Solaris servers with 400 TB of Sun FibreChannel RAID arrays.
- Tape server tier: 25 dual-CPU Sun/Solaris servers, 40 STK 9940B and 6 STK 9840A drives, 6 STK Powderhorn silos holding over 1 PB of data.
- Data access software: Objectivity/DB object database plus HEP-specific ROOT software (Xrootd); HPSS plus SLAC enhancements to the Objectivity and ROOT server code.

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 8/16)
BaBar Tier-A Centers
- A component of the Fall 2000 BaBar Computing Model: they offer resources at the disposal of BaBar, and each provides tens of percent of the total BaBar computing/analysis need.
  - 50% of BaBar computing investment was in Europe in 2002 and 2003.
- CCIN2P3, Lyon, France: in operation for 3+ years.
- RAL, UK: in operation for 2+ years.
- INFN-Padova, Italy: in operation for 2 years.
- GridKA, Karlsruhe, Germany: in operation for 1 year.

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 9/16)
SLAC-PPDG Grid Team (name, FTE fraction, role):
- Richard Mount, 10%: PI
- Bob Cowles, 10%: Strategy and Security
- Adil Hasan, 50%: BaBar Data Management
- Andy Hanushevsky, 20%: Xrootd, Security, ...
- Matteo Melani, 80%: New hire
- Wilko Kroeger, 100%: SRB data distribution
- Booker Bense, 80%: Grid software installation
- Post Doc, 50%: BaBar - OSG

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 10/16)
Network/Grid Traffic

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 11/16)
SLAC-BaBar-OSG
BaBar-US has been:
- Very successful in deploying Grid data distribution (SRB, US-Europe).
- Far behind BaBar-Europe (already in production for simulation) in deploying Grid job execution.
SLAC-BaBar-OSG plan:
- Focus on achieving massive simulation production in the US within 12 months.
- Make 1000 SLAC processors part of OSG.
- Run BaBar simulation on SLAC and non-SLAC OSG resources.
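To make "Grid data distribution (SRB)" concrete, the sketch below shows the kind of SRB Scommand calls involved in registering a file in a collection and retrieving it at another site. The collection path, resource and file names are made up; the real BaBar/SLAC SRB namespace will differ.

```python
# Hedged sketch of SRB-style data distribution using the Scommand client tools
# (Sinit/Sput/Sls/Sget/Sexit). Assumes a configured ~/.srb/.MdasEnv pointing at
# an SRB/MCAT server; the collection and file names below are hypothetical.
import subprocess

def scmd(*args):
    """Run one SRB Scommand and fail loudly if it errors."""
    subprocess.run(list(args), check=True)

scmd("Sinit")                                               # open an authenticated SRB session
scmd("Sput", "skim-run42.root",
     "/home/babar.collections/skims/skim-run42.root")       # register a local file into a collection
scmd("Sls", "/home/babar.collections/skims")                # list what the collection now holds
scmd("Sget", "/home/babar.collections/skims/skim-run42.root", ".")  # fetch it back at another site
scmd("Sexit")                                               # close the session
```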

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 12/16)
3: Distributed Analysis
At GridPP9:
- Good news: basic grid job submission system deployed and working (Alibaba / Gsub) with the GANGA portal.
- Bad news: low take-up, because users were uninterested and reliability was poor.

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 13/16)
Since then…
- Janusz: improve portal; develop web-based version.
- Alessandra: move to Tier 2 system manager post.
- James: starts June 14th; attended GridPP10 meeting.
- Mike: give talk at IoP parallel session; write abstract (accepted) for All Hands meeting; write thesis.
- Roger: submit Proforma 3; complete quarterly progress report; revise Proforma 3; advertise and recruit replacement post; negotiate on revised Proforma 3; write abstract (pending) for CHEP; submit JeSRP-1; write contribution for J Phys G Grid article.

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 14/16)
Future two-point plan (1)
- James to review, revise and relaunch the job submission system.
- Work with the UK Grid/SP team (short term) and the Italian/German LCG system (long term).
- Improve reliability through a core team of users on a development system.

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 15/16)
Future two-point plan (2)
- RAL CPUs are very heavily loaded by BaBar: slow turnround, stressed users.
- Make significant CPU resources available to BaBar users only through the Grid:
  - some of the new Tier 1/A resources;
  - all of the Tier 2 (Manchester) resources.
- And see Grid certificate take-up grow! Drive Grid usage through incentive.

BaBarGrid: GridPP10, CERN, June 3 2004 (Slide 16/16)
Final Word
Our problems today will be your challenges tomorrow.