The BaBarGrid demonstrator
Tim Adye, Roger Barlow, Alessandra Forti, Andrew McNab, David Smith

What is BaBar?
The BaBar detector is a High Energy Physics experiment expanding our understanding of antimatter. Running at an accelerator in Stanford, it gathers data on the decays of rare B particles.

The Data
We have now recorded over 100 million B decays. These are real events, not simulated ones (though we have those too!), and hundreds more arrive every minute when the experiment is running. Each decay provides a lot of data: about 20 MB. This data is studied by a team of over 500 physicists from many countries, so the data and the computers to process it are distributed across the world. There is a strong contingent from the UK.

Distribution
Data is distributed across many sites, and it may be duplicated. Processing must be spread across the available resources: we use dedicated CPU farms at 9 UK institutes, and also at Lyon, Karlsruhe and Bologna as well as Stanford. The CPU farms and disk arrays do not all 'belong to BaBar'; they belong to the universities and research institutes, and are shared.

Grid Technology
Grid technology is clearly the way forward. It promises a future system in which the user specifies the data they want to analyse and how they want to analyse it, and a job submission system locates the relevant files (choosing between alternatives if there are multiple copies), runs the analysis jobs on those files using locally available CPUs, then retrieves the various outputs and combines them seamlessly before returning them to the user. Grid authorisation and authentication tools can enable BaBar collaborators to use facilities at any BaBar site without bureaucratic hindrance.

The Demonstrator
The purpose of building the demonstrator was to see how much is possible today, using existing tools, as a first and hopefully useful step towards such a future system. The demonstrator runs through a web browser, typically on the user's desktop or laptop. No further software (such as Globus) is required on that platform, since demanding it was felt to be too restrictive.

Step 1: Data Specification
A physicist selects events of a particular type that they are interested in.

Step 2: Data Location
They use a web browser to ask which sites have files of this type. Sites are listed in order of preference: the physicist will probably want to use data at their local site and go to remote sites only for files that do not exist locally. The system handles this prioritisation.

Step 3: Job submission
They then press the 'Go' button to launch the job(s).
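The site ordering in Step 2 is simple to express in code. The sketch below is not the demonstrator's implementation: the catalogue contents, site names and function names are all assumptions, but it shows the local-site-first prioritisation described above.

# Hypothetical sketch of Step 2's data location and prioritisation.
# The catalogue and site names are invented for illustration; in the
# demonstrator this information comes from the web engine itself.

CATALOGUE = {
    "B0 -> J/psi K0s": ["RAL", "Manchester", "SLAC", "Lyon"],
    "B+ -> D0 pi+":    ["SLAC", "Karlsruhe", "Bologna"],
}

def sites_for(event_type, local_site):
    """Return candidate sites for a data type, local site first."""
    sites = CATALOGUE.get(event_type, [])
    # Prefer the user's own site; remote sites are used only for
    # files that do not exist locally.
    return sorted(sites, key=lambda s: s != local_site)

if __name__ == "__main__":
    # A Manchester user asking where their chosen events can be analysed.
    print(sites_for("B0 -> J/psi K0s", local_site="Manchester"))
    # -> ['Manchester', 'RAL', 'SLAC', 'Lyon']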

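Pressing 'Go' hands the chosen sites and jobs to the web engine, which drives standard Globus Toolkit commands; Step 4 and the 'How it works' panels below give the details. As a rough illustration only (the gatekeeper names, jobmanager and script path are assumptions, and a valid grid proxy is taken for granted), the submission and status calls look something like this:

# Illustrative only: spool one job per site with globus-job-submit and
# keep the job contact each command prints, so status and output can be
# queried later. Hostnames and paths are invented for this sketch.
import subprocess

SITES = ["tbn01.ph.man.ac.uk", "heplnx01.pp.rl.ac.uk"]   # assumed gatekeepers

def submit_everywhere(job_script):
    contacts = {}
    for gatekeeper in SITES:
        # globus-job-submit prints a job contact URL on success.
        result = subprocess.run(
            ["globus-job-submit", f"{gatekeeper}/jobmanager-pbs", job_script],
            capture_output=True, text=True, check=True)
        contacts[gatekeeper] = result.stdout.strip()
    return contacts

def check_status(contact):
    # The same contact URL is passed to globus-job-status later on.
    return subprocess.run(["globus-job-status", contact],
                          capture_output=True, text=True).stdout.strip()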
Step 4: The jobs are spooled to the remote sites
The web engine submits the jobs to the remote sites using globus-job-submit. The necessary control files (of which there are many) are copied along with the job, once for each site. The URL pointing to the job output and status information is caught for later use.

Step 5: Status is monitored
The status of each individual job can be monitored (using the URL caught earlier). When the jobs are finished the output log files can be retrieved and inspected, though this is usually only necessary if something goes wrong.

Step 6: Output retrieval
The histograms produced by each job can be collected from all sites (by submitted jobs which tar and copy the files to a single file accessible by HTTP). This automatically invokes the ROOT display program to run on the user's platform and produce the physics output.

How it works 1: Authorisation
The user must have done a grid-proxy-init on their platform. They then upload the X509 certificate into the GridPP web engine. The web engine is then able to use this certificate to authenticate the job submission (done with globus-job-submit).

How it works 2: The BaBar VO
When a job is submitted to a remote site and has been authenticated, the grid name is checked against the site's map list to ensure that this person is authorised to use BaBar resources. This list of authorised BaBar users, the BaBar VO, is maintained semi-automatically, with minimal user action required. All BaBar users have an account on the central system at SLAC, and anyone intending to use the Grid has a certificate. If the user copies their grid certificate to their SLAC account space, a cron job detects it and checks that they are on the 'babar' AFS access control list. If so, the details are copied to the central VO list, maintained at Manchester. Another cron job copies this list to the map files of the sites involved.

How it works 3: Dynamic accounts
When a job has been authenticated and authorised, the name on the grid certificate is compared with a list of known users and userids at that system. If a match is found, the job is submitted under that userid. If no match is found, the user is allocated a userid from a pool (babar01, babar02, ...). If the user has used this machine in the past they are allocated their previous account if possible; otherwise the next free one on the list is used. This gives the user the ability to run jobs at any site in BaBarGrid (currently about 10, eventually 50+) without getting an individual account at each site (and 500+ users x 50+ sites = bureaucracy!). It also gives the system manager an audit trail: if a job run by a particular pool account misbehaves (inadvertently or maliciously), it can be linked to the real physical user.
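The allocation logic of 'How it works 3' can be summarised in a few lines. This is a simplified sketch, not the actual gatekeeper code: the data structures and names are assumptions, but the behaviour (prefer a known local account, then reuse a previous pool account, then take the next free one, always keeping an audit record) follows the description above.

# Simplified sketch of dynamic pool-account allocation, as described in
# "How it works 3". The structures below stand in for the real
# grid-mapfile and pool-account handling at BaBarGrid sites.

KNOWN_USERS = {          # grid certificate name -> local userid (assumed)
    "/C=UK/O=eScience/OU=Manchester/CN=A Physicist": "aphys",
}
POOL = [f"babar{i:02d}" for i in range(1, 21)]   # babar01 ... babar20

previous_pool = {}       # grid name -> pool account used before
in_use = set()
audit_log = []           # (grid name, account) pairs: the audit trail

def account_for(grid_name):
    """Pick the userid a job with this certificate should run under."""
    if grid_name in KNOWN_USERS:              # a real local account exists
        account = KNOWN_USERS[grid_name]
    elif grid_name in previous_pool:          # reuse an earlier pool account
        account = previous_pool[grid_name]
    else:                                     # allocate the next free one
        account = next(a for a in POOL if a not in in_use)
        previous_pool[grid_name] = account
        in_use.add(account)
    audit_log.append((grid_name, account))    # misbehaviour can be traced back
    return account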