Moving the LHCb Monte Carlo production system to the GRID


Moving the LHCb Monte Carlo production system to the GRID
D. Galli, U. Marconi, V. Vagnoni (INFN Bologna); N. Brook (Bristol); E. van Herwijnen, P. Mato (CERN); A. Khan (Edinburgh); M. McCubbin, G. D. Patel (Liverpool); A. Tsaregorodtsev (Marseille); H. Bulten, S. Klous (Nikhef); F. Harris (Oxford); G. N. Patrick, R. A. Sansum (RAL)
3 Sept 2001, F. Harris, CHEP, Beijing

Overview of presentation
Functionality and distribution of the current system
Experience with the use of Globus in tests and production
Requirements and planning for the use of DataGrid middleware and the security system
Planning for interfacing the GAUDI software framework to GRID services
Conclusions

LHCb distributed computing environment (15 countries: 13 European + Brazil and China; 50 institutes)
Tier-0: CERN
Tier-1: RAL (UK), IN2P3 (Lyon), INFN (Bologna), Nikhef, CERN + ?
Tier-2: Liverpool, Edinburgh/Glasgow, Switzerland + ? (may grow to ~10)
Tier-3: ~50 throughout the collaboration
Ongoing negotiations for centres (Tier-1/2/3) in Germany, Russia, Poland, Spain and Brazil
Current GRID involvement: DataGrid (and national GRID efforts in the UK, Italy, ...)
Active in WP8 (HEP Applications) of DataGrid
Will use middleware (WP1-5) + Testbed (WP6) + Network (WP7) + security tools

Current MC production facilities
The maximum number of CPUs used simultaneously is usually less than the capacity of the farm. Will soon extend to Nikhef, Edinburgh and Bristol.

Distributed MC production, today
Submit jobs remotely via the Web
Transfer data to the CASTOR mass store at CERN
Update the bookkeeping database (Oracle at CERN)
Execute on farm
Data quality check on data stored at CERN
Monitor performance of the farm via the Web
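The production steps listed above can be sketched as a single driver loop. This is a minimal illustration only: the function and step names below are hypothetical stand-ins, not the actual LHCb production scripts.

```python
# Sketch of the distributed MC production workflow described on the slide.
# Each step is reduced to a log entry; real steps would call the web
# submission interface, CASTOR, and the Oracle bookkeeping database.

def run_production_job(job_id, farm, log):
    """Drive one MC job through the production steps, recording each one."""
    log.append(f"submit {job_id} to {farm} via web")       # remote submission
    log.append(f"execute {job_id} on {farm}")              # farm batch system
    log.append(f"copy {job_id} output to CASTOR at CERN")  # mass storage
    log.append(f"update Oracle bookkeeping for {job_id}")  # metadata at CERN
    log.append(f"quality-check {job_id} data at CERN")     # histogram checks
    return log

log = run_production_job("mc00123", "RAL", [])
```

The point of the sketch is that every step other than execution runs against central services at CERN, which is what the DataGrid middleware on the next slide is meant to generalise.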

Distributed MC production in future (using DataGRID middleware)
Submit jobs remotely via the Web (WP1 job submission tools, WP4 environment)
Transfer data to CASTOR, and to HPSS and the RAL Datastore (WP2 data replication, WP5 API for mass storage)
Execute on farm (WP1 job submission tools)
Update bookkeeping database (WP2 metadata tools)
'Online' data quality check: online histogram production using GRID pipes (WP3 monitoring tools)
Monitor performance of farm via the Web (WP1 tools)

Use of Globus in tests and production
Use of Globus simplifies remote production: submit jobs through local Globus commands rather than remote logon
Some teething problems in tests (some due to the learning curve)
Some limitations to the system (e.g. need for large temporary space for running jobs)
Some mismatches between Globus and the PBS batch system (job parameters ignored; submitting >100 jobs gives problems)
DataGrid testbed organisation will ensure synchronisation of versions at sites + Globus support
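The local-command submission model above can be illustrated by composing a Globus Toolkit 2 command line. The sketch below only builds the string (it does not execute anything); the host name, jobmanager contact and script path are invented examples, so check your site's actual contact string.

```python
# Sketch: composing a GT2 job-submission command for a PBS-managed farm.
# globus-job-submit is the standard GT2 client command; everything else
# here (host, script, arguments) is an illustrative placeholder.
import shlex

def globus_submit_cmd(host, jobmanager, script, *args):
    """Build the remote-submission command line (not executed here)."""
    contact = f"{host}/{jobmanager}"   # e.g. somehost.example.org/jobmanager-pbs
    parts = ["globus-job-submit", contact, script, *args]
    return " ".join(shlex.quote(p) for p in parts)

cmd = globus_submit_cmd("farm01.example.org", "jobmanager-pbs",
                        "/opt/lhcb/runsim.sh", "1000")
```

The PBS mismatches mentioned on the slide arise precisely at this boundary: parameters passed to the jobmanager are not always forwarded to the underlying batch system.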

Security
M9 (October 2001):
Authorisation group working towards a tool providing single log-on and a single role per individual
Individuals will get certificates from their national CA
Must work out the administration for this at the start for the experiment VO; probably ~10 users for LHCb
M21 (October 2002):
Single log-on firmly in place
Moved to a structured VO with (group, individual) authorisation; multiple roles
Maybe up to ~50 users

Job Submission
M9:
Use command-line interface to WP1 JDL; 'static' file specification
Use environment specification as agreed with WP1 and WP4 (no cloning)
M21:
Interface to WP1 job options via LHCb application (GANGA)
Dynamic 'file' environment according to application navigation; may require access to query-language tools for metadata
More comprehensive environment specification
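For the M9 'static' file specification, a WP1 JDL description might look like the fragment below. The attribute names (Executable, InputSandbox, OutputSandbox, Requirements) follow the EDG ClassAd-based JDL convention; the file names and the requirement expression are purely illustrative.

```
Executable    = "runsim.sh";
Arguments     = "1000";
StdOutput     = "sim.out";
StdError      = "sim.err";
InputSandbox  = {"runsim.sh", "options.txt"};
OutputSandbox = {"sim.out", "sim.err", "sim.log"};
Requirements  = other.OpSys == "Linux";
```

The static nature is visible directly: every input and output file must be named up front, which is what the M21 move to a dynamic, application-driven file environment is meant to relax.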

Job Execution
M9:
Will run on farms at CERN, Lyon and RAL for first tests
Extend to Nikhef, Bologna and Edinburgh once we get stability
Will use a very simple environment (binaries); 'production' flavour for work
M21:
Should be running on many sites (~20?)
Complete LHCb environment for production and development, without AFS (use WP1 'sandboxes')
Should be testing user analysis via GRID, as well as performing production (~50 users)

Job Monitoring and data quality checking
M9:
Monitor farms with home-grown tools via the Web
Use home-grown data histogramming tools for data monitoring
M21:
Integrate WP3 tools for farm performance (status of jobs)
Combine LHCb ideas on state management and data quality checking with DataGrid software

Bookkeeping database
M9:
Use the current CERN-centric Oracle-based system
M21:
Move to WP2 metadata handling tools? (use of LDAP, Oracle?)
This will be distributed database handling using the facilities of the replica catalogue and replica management
LHCb must interface the applications view (metadata) to GRID tools; query tools availability?
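The interface problem named above, connecting the applications' metadata view to the Grid's replica view, can be shown with a toy two-level lookup. All dataset, file and site names below are invented for illustration.

```python
# Toy sketch of the split the slide describes: bookkeeping metadata maps a
# dataset to logical file names; a WP2-style replica catalogue maps each
# logical name to its physical copies.

bookkeeping = {  # application/metadata view (today: Oracle at CERN)
    "bd-incl-v5": ["lfn:mc00123.sim", "lfn:mc00124.sim"],
}

replica_catalogue = {  # replica-management view
    "lfn:mc00123.sim": ["castor://cern.ch/mc00123.sim",
                        "rfio://ral.ac.uk/mc00123.sim"],
    "lfn:mc00124.sim": ["castor://cern.ch/mc00124.sim"],
}

def replicas_for_dataset(dataset):
    """Resolve a dataset name to all physical replicas of its files."""
    return {lfn: replica_catalogue.get(lfn, [])
            for lfn in bookkeeping.get(dataset, [])}

reps = replicas_for_dataset("bd-incl-v5")
```

The open question on the slide ("query tools availability?") is about exactly this join: the experiment needs to query by physics metadata, while WP2 only indexes by logical file name.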

Data copying and mass storage handling
M9:
WP2 GDMP tool via command-line interface to transfer Zebra-format files (control from LHCb scripts)
WP5 interface to CASTOR
M21:
GDMP will be replaced by smaller tools with an API interface; copy Zebra + Root + ?
Tests of strategy-driven copying via the replica catalogue and replica management
WP5 interfaces to more mass storage devices (HPSS + RAL Datastore)
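"Strategy-driven copying" can be illustrated as choosing, among a file's replicas, the one cheapest to reach from the execution site before copying. The cost table and site names below are invented; a real implementation would get this information from the replica-management service.

```python
# Sketch of strategy-driven replica selection: pick the replica closest
# to the execution site. Costs are notional placeholders.

transfer_cost = {  # (storage host, execution site) -> notional cost
    ("cern.ch", "CERN"): 1, ("cern.ch", "RAL"): 5,
    ("ral.ac.uk", "CERN"): 5, ("ral.ac.uk", "RAL"): 1,
}

def choose_replica(replicas, site):
    """Return the cheapest replica URL for a given execution site."""
    def cost(url):
        host = url.split("//")[1].split("/")[0]     # crude host extraction
        return transfer_cost.get((host, site), 10)  # unknown: assume far
    return min(replicas, key=cost)

best = choose_replica(["castor://cern.ch/mc00123.sim",
                       "rfio://ral.ac.uk/mc00123.sim"], "RAL")
```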

Gaudi Architecture
[Architecture diagram: the Application Manager coordinates services (Event Data Service, Detector Data Service, Histogram Service, Message Service, JobOptions Service, Particle Properties Service, Persistency Services), an Event Selector, converters and algorithms; transient event, detector and histogram stores are populated from data files via the persistency services.]
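The architecture above is service-based: algorithms do not create services, they obtain them by name from the application manager. A minimal sketch of that service-locator pattern, with invented class and service names, is:

```python
# Minimal service-locator sketch in the spirit of the GAUDI picture:
# an application manager owns named services; algorithms look them up.

class MessageService:
    def report(self, text):
        return f"MSG: {text}"

class ApplicationManager:
    def __init__(self):
        self._services = {}

    def declare(self, name, service):
        self._services[name] = service

    def service(self, name):
        return self._services[name]  # algorithms retrieve services by name

class PrintAlgorithm:
    def __init__(self, appmgr):
        self.msg = appmgr.service("MessageService")

    def execute(self):
        return self.msg.report("event processed")

appmgr = ApplicationManager()
appmgr.declare("MessageService", MessageService())
out = PrintAlgorithm(appmgr).execute()
```

This indirection is what makes the Grid interfacing plan tractable: a service implementation can be swapped for one backed by a Grid service without touching the algorithms.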

GAUDI services linking to external services
[Diagram: GAUDI components (Application Manager, Event Selector, Config Service, Event Data / Detector Data / Histogram services, converters, algorithms, transient stores) linked to external services: dataset database, event database, PDG database, OS, job service, mass storage, monitoring service, analysis program, histogram presenter.]

Another View
[Diagram: algorithms in the Gaudi domain call Gaudi services through an API; the services in turn use external services in the Grid domain.]

GANGA: Gaudi ANd Grid Alliance
[Diagram: a GUI on top of GANGA, which connects a GAUDI program (job options, algorithms) to collective & resource Grid services and returns histograms, monitoring information and results.]
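The GANGA idea in the diagram is a single job object that carries the GAUDI job options and can be pointed at either a local or a Grid backend. The `GangaJob` API below is invented purely to illustrate that shape; it is not the actual GANGA interface, which was still being designed at the time of this talk.

```python
# Hypothetical sketch of the GANGA concept: one job abstraction over
# both local batch and Grid submission backends.

class GangaJob:
    def __init__(self, application, options, backend="local"):
        self.application = application
        self.options = options    # GAUDI JobOptions for the run
        self.backend = backend    # "local" batch or "grid" services

    def submit(self):
        """Hand the job to the chosen backend; here we just describe it."""
        return (f"submit {self.application} with {len(self.options)} "
                f"options via {self.backend} backend")

job = GangaJob("Brunel", {"EvtMax": 1000, "Output": "hists.root"},
               backend="grid")
status = job.submit()
```

The design choice being illustrated: the physicist configures one object, and switching from local testing to Grid production is a change of backend, not a change of workflow.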

Conclusions
LHCb already has distributed MC production using GRID facilities for job submission
Will test DataGrid M9 (Testbed1) deliverables in an incremental manner from October 15, using tools from WP1-5
Have commenced defining projects to interface software framework (GAUDI) services (Event Persistency, Event Selection, Job Options) to GRID services
Within the WP8 structure we will work closely with the other work packages (middleware, testbed, network) in a cycle of requirements analysis, design, implementation and testing
http://lhcb-comp.web.cern.ch/lhcb-comp/
http://datagrid-wp8.web.cern.ch/DataGrid-WP8/