Slide 1: BaBar Experience
Tim Adye, Rutherford Appleton Laboratory
PPNCG Meeting, Brighton, 11th September 2002

Slide 2: Talk Outline
- New (more distributed) BaBar computing model and its impact, especially on RAL
- User experience
- Transfers SLAC -> RAL
- Transfers UK -> SLAC

Slide 3: New Computing Model
- The goal is to spread the computing load much more widely around the collaboration.
- Simulation production is already highly distributed.
- Small-scale analysis is already performed at universities (9 in the UK) and regional centres (e.g. RAL).
- We now have three new "Tier A" centres:
  - Lyon: Objectivity (database) analysis (since last year)
  - RAL: Kanga (ROOT MicroDST) analysis (since May 2002)
  - Padova: reprocessing (commissioning)
- RAL has relieved SLAC of all Kanga analysis.
- Each site requires large data transfers from and to SLAC.

Slide 4: BaBar CPU Usage at RAL Tier A
[Plot: BaBar CPU usage at the RAL Tier A]

Slide 5: User Experience
- We now have users throughout the US and Europe.
- The interactive experience is generally excellent: "Connecting to RAL and working at RAL was very fast, as fast as at SLAC." - Uriel Nauenberg, University of Colorado at Boulder
- AFS access between the UK and SLAC is still slow (SLAC -> RAL AFS, and RAL+UK -> SLAC AFS).
- This may be an intrinsic property of AFS over links with a longer round-trip time (RTT).
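To see why a longer RTT alone could explain the slowdown: any protocol with a fixed in-flight window caps single-connection throughput at window/RTT, however fast the underlying link. The sketch below illustrates the effect; the ~45 KB effective window is an assumed figure chosen for illustration, not a measured AFS parameter.

```python
# Back-of-envelope: a window-limited protocol cannot exceed window/RTT
# per connection, regardless of the raw link capacity.

def max_throughput_mbps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on single-connection throughput for a window-limited protocol."""
    return window_bytes * 8 / rtt_seconds / 1e6

# Hypothetical effective window of 45 KB (assumed, not measured),
# comparing a campus-scale RTT with a transatlantic one.
for label, rtt in [("local (1 ms)", 0.001), ("UK <-> SLAC (150 ms)", 0.150)]:
    print(f"{label}: {max_throughput_mbps(45_000, rtt):.1f} Mbit/s")
```

With these assumed numbers the same protocol drops from hundreds of Mbit/s locally to a few Mbit/s across the Atlantic, which would match the observed behaviour.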

Slide 6: Bulk Transfers SLAC -> RAL
- Production at SLAC has come in bursts (e.g. reprocessing of old data); Kanga production is the final (and relatively simple) step in a long processing chain.
- We have had no problem keeping up with the steady state; there are sometimes delays of 1-2 weeks to catch up with a burst in production, which is quite acceptable.
- So far, >19 TB have been copied from SLAC and are on disk at RAL (15 TB since January).
- RAL is now the primary Kanga repository, so other sites in the UK, Europe and US will copy from us; demand is so far modest, mostly << 1 TB per site (10-20 sites).
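As a quick consistency check on these figures, moving 15 TB between January and September works out to a sustained rate of about 6 Mbit/s, comfortably within the 5-25 Mbit/s per-session rates reported on the bandwidth slide below. A minimal sketch (month lengths approximated):

```python
# Sustained rate implied by 15 TB moved in the ~8 months since January.
seconds = 8 * 30 * 86400        # ~8 months, approximated as 30-day months
bits = 15e12 * 8                # 15 TB, taking 1 TB = 1e12 bytes
print(f"average rate: {bits / seconds / 1e6:.1f} Mbit/s")   # ~5.8 Mbit/s
```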

Slide 7: Kanga Data Transfers, bbftp SLAC -> RAL
[Plot: Kanga data transfer volumes, bbftp SLAC -> RAL]

Slide 8: Transfer Rate per bbftp Session (2-20 streams each)
[Plot: transfer rate per bbftp session, each session using 2-20 streams]

Slide 9: Bandwidth
- The transfer rate of 5-25 Mbit/s is obviously much less than the 622 Mbit/s SLAC-RAL link.
- We actually get up to 50 Mbit/s by using multiple sessions on different servers (see the sketch after this slide).
- Several effects probably limit us:
  - We only run a few (1-5) simultaneous sessions; more are cumbersome to manage.
  - We currently use only a couple of 100 Mbit/s servers at RAL; firewall problems with bbftp limit which servers we can use. We will soon have dedicated Gbit import/export servers.
  - bbftp doesn't handle small files very efficiently (typical file sizes are of order MB). We will switch to bbcp, but that requires a bug fix (on Andy's list).
- This is not a problem at the moment.
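For reference, a multi-stream bbftp transfer can be driven from a small wrapper like the one below. This is a sketch, not the production tooling: the host name, user and paths are placeholders, and the flags assumed here (-p for the number of parallel streams, -u for the remote user, -e for the control command) should be checked against `bbftp -h` on your installed version.

```python
# Hypothetical wrapper for fetching one file from SLAC with bbftp using
# several parallel TCP streams.  Host, user and paths are placeholders.
import subprocess

def bbftp_get(remote_path: str, local_path: str,
              host: str = "bbftp.slac.stanford.edu",   # placeholder host
              user: str = "babar", streams: int = 10) -> None:
    """Run one bbftp session with `streams` parallel streams."""
    subprocess.run(
        ["bbftp",
         "-u", user,                 # remote account (assumed flag)
         "-p", str(streams),         # number of parallel streams (assumed flag)
         "-e", f"get {remote_path} {local_path}",
         host],
        check=True,
    )
```

Running a handful of such sessions against different servers is what raised the aggregate rate from 5-25 Mbit/s per session to ~50 Mbit/s.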

Slide 10: Transfers UK -> SLAC
- The UK now performs ~75% of total BaBar simulation production, mostly at university sites, though RAL is starting up.
- Objectivity output files are sent back to SLAC.
- The data volume has been reduced by a factor of 8 by dropping intermediate files no longer required for most analyses.
- The output is then skimmed, converted, and re-exported to Tier A and C sites.

Slide 11: Simulation Transfers to SLAC (all sites)
[Plot: simulation data transferred to SLAC, by week and by site, 1 Aug 2001 to 1 Sep 2002; 140 TB transferred in total. The x8 rate drop after Apr 2002 marks the point where raw/sim files were no longer shipped.]
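Again as a rough cross-check (a sketch, with month lengths approximated): 140 TB over the 13 months shown corresponds to an aggregate of roughly 33 Mbit/s into SLAC from all sites combined.

```python
# Aggregate rate implied by 140 TB shipped to SLAC over ~13 months.
seconds = 13 * 30 * 86400       # 1 Aug 2001 to 1 Sep 2002, ~13 months
bits = 140e12 * 8               # 140 TB, taking 1 TB = 1e12 bytes
print(f"aggregate rate: {bits / seconds / 1e6:.0f} Mbit/s")   # ~33 Mbit/s
```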

Slide 12: Transfer Rate, UK Simulation Production
- The transfer rate is limited by the local and SLAC infrastructure and by the number of bbftp streams (currently 3).
- Gaps in the transfers show we are keeping up; CPU is the main limit.
- Full plots at ...
[Plot: transfer rate at a typical UK site]

Slide 13: Summary
- The new computing model is heavily reliant on the network, especially UK to/from SLAC.
- The user experience is good, but is there anything we can do about AFS?
- Bulk transfer from and to SLAC is currently limited by local infrastructure.
- Nevertheless, we are easily keeping up with SLAC and UK farm production.