Presentation transcript: "LHCb 'use-case' - distributed MC production" - F Harris, Datagrid Testbed meeting at Milan, 11 Dec 2000

Slide 1: LHCb 'use-case' - distributed MC production
http://lhcb-comp.web.cern.ch/lhcb-comp/grid/Default.htm
F Harris (Oxford), E van Herwijnen (CERN), G Patrick (RAL)

Slide 2: Overview of presentation
- The LHCb distributed MC production system
- Where can GRID technology help? Our requirements
- Current production centres and GRID Testbeds

Slide 3: LHCb working production system (and forward look to putting in GRID)
[Flattened workflow diagram; the recoverable steps are:]
CERN or remote:
- Construct job script and submit via Web, remote or local at CERN (GRID certification)
- Generate events; write log to Web (globus-run)
- Copy output to a mass store, e.g. RAL Atlas data store or CERN shift system (globus-rcp, gsi-ftp)
CERN only:
- Call servlet (at CERN): find next free tape slot, get token on shd18 (certification), copy data from the mass store to shift, copy data to tape at CERN (gsi-ftp)
- Update bookkeeping db (Oracle)
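The flow above can be read as a driver script. Below is a minimal sketch in Python, assuming the Globus 1.1.x command-line tools named on the slide are on PATH (globus-job-run standing in for the slide's "globus-run", plus globus-rcp) and that a valid GSI proxy already exists; the host names, paths, file names, and servlet URL are hypothetical placeholders, not the real LHCb values.

"""Sketch only: drives the slide's generate -> copy -> servlet chain."""
import subprocess
import urllib.request

SITE_GATEKEEPER = "ccwali01.in2p3.fr"          # hypothetical production node
MASS_STORE = "shift07.cern.ch:/shift/lhcb/mc"  # hypothetical CERN shift path
SERVLET = "http://lhcb-comp.web.cern.ch/servlet/bookkeeping"  # hypothetical endpoint

def generate_events(script: str) -> None:
    # Submit the generation job to the remote gatekeeper (slide: "globus-run").
    subprocess.run(["globus-job-run", SITE_GATEKEEPER, script], check=True)

def copy_to_mass_store(local_file: str) -> None:
    # Stage the output to the mass store (slide: "globus-rcp, gsi-ftp").
    subprocess.run(["globus-rcp", local_file, MASS_STORE], check=True)

def update_bookkeeping(dataset: str) -> None:
    # Call the CERN servlet that finds the next free tape slot and
    # updates the Oracle bookkeeping database.
    urllib.request.urlopen(f"{SERVLET}?dataset={dataset}").read()

if __name__ == "__main__":
    generate_events("/lhcb/prod/gen_bbar.sh")   # hypothetical job script
    copy_to_mass_store("bbar_run0042.raw")      # hypothetical output file
    update_bookkeeping("bbar_run0042")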

Slide 4: Problems of production system
Main issue: we are forced to copy all data back to CERN.
Reasons for this:
- Standard cataloguing tools do not exist, so we cannot keep track of the data where it is produced
- Absence of smart analysis job-submission tools that move executables to where the input data is stored
Steps that make the production difficult (see the retry sketch below):
- Authorisation (jobs can be submitted only from trusted machines)
- Copying data (generated both inside and outside CERN) into the CERN mass store (many fragile steps)
- Updating of the bookkeeping database at CERN (the Oracle interface is non-standard)
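The "many fragile steps" complaint suggests an obvious, if partial, mitigation: retry each transfer with a pause between attempts. A minimal sketch, assuming the same globus-rcp tool as above and hypothetical file and path names; this is illustrative, not the production scripts.

"""Sketch only: harden a flaky copy step with bounded retries."""
import subprocess
import time

def run_with_retry(cmd: list[str], attempts: int = 3, pause_s: int = 30) -> None:
    # Re-run a flaky transfer command a few times before giving up.
    for attempt in range(1, attempts + 1):
        try:
            subprocess.run(cmd, check=True)
            return
        except subprocess.CalledProcessError:
            if attempt == attempts:
                raise
            time.sleep(pause_s)  # back off before the next attempt

# e.g. retry the mass-store copy from the previous slide's flow
run_with_retry(["globus-rcp", "bbar_run0042.raw",
                "shift07.cern.ch:/shift/lhcb/mc"])  # hypothetical paths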

Slide 5: Where can the GRID help?
- A very transparent way of authorizing users on remote computers
- Data-set cataloguing tools (LHCb has expertise and is willing to share experience): to avoid unnecessary replication, and, if replication is required, to provide fast and reliable tools
- Analysis job-submission tools that interrogate the data-set catalogue and specify where the job should be run (the executable may need to be sent to the data); see the sketch after this list
- Reading different data sets from different sites into an interactive application
- A standard/interface for submitting and monitoring production jobs on any node on the GRID
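To make the job-submission requirement concrete, here is a toy sketch of the catalogue interrogation being asked for: a replica catalogue maps each data set to the sites holding a copy, and the submitter runs the job where the data already is. The catalogue contents and site names are hypothetical stand-ins; the tools themselves did not yet exist.

"""Sketch only: pick the execution site from a replica catalogue."""

# hypothetical replica catalogue: data set -> sites holding a copy
CATALOGUE = {
    "bbar_inclusive_250k": ["RAL", "CERN"],
    "mbias_250k": ["Liverpool"],
}

def choose_site(dataset: str, preferred: str = "CERN") -> str:
    """Run where the data is, avoiding unnecessary replication."""
    sites = CATALOGUE.get(dataset)
    if not sites:
        raise LookupError(f"{dataset} not in catalogue")
    # Prefer the user's local site if it already holds a replica;
    # otherwise ship the executable to the first site with the data.
    return preferred if preferred in sites else sites[0]

print(choose_site("mbias_250k"))  # -> "Liverpool": executable goes to the data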

Slide 6: Current and 'imminent' production centres
- CERN: samples (several channels) for the Trigger TDR on PCSF (~10^6 events)
- RAL: 50k events of channel 411400 (Bd -> J/psi K, e+e-), DST2, for the Trigger; 250k inclusive bbar + 250k minimum-bias, RAWH and DST2, no cuts
- Liverpool: 2 million MDST2 events after L0 and L1 cuts
- Lyon: plan to do 250k inclusive bbar events without cuts (January)
- Nikhef and Bologna: will generate samples for detector and trigger studies (Mar/April?)

Slide 7: Initial LHCb-UK GRID "Testbed"
[Flattened site diagram; recoverable elements, each marked "exists" or "planned" in the original:]
- RAL CSF: 120 Linux CPUs, IBM 3494 tape robot
- Liverpool MAP: 300 Linux CPUs
- CERN: pcrd25.cern.ch, lxplus009.cern.ch
- Institutes: RAL (PPD), Bristol, Imperial College, Oxford, Glasgow/Edinburgh ("Proto-Tier 2")
- RAL DataGrid Testbed

Slide 8: Initial Architecture
- Based around existing production facilities (separate Datagrid testbed facilities will eventually exist)
- Intel PCs running Linux Red Hat 6.1
- Mixture of batch systems (LSF at CERN, PBS at RAL, FCS at MAP)
- Globus 1.1.3 everywhere
- Standard file-transfer tools (e.g. globus-rcp, GSIFTP)
- GASS servers for secondary storage?
- Java tools for controlling production, bookkeeping, etc.
- MDS/LDAP for the bookkeeping database(s); see the query sketch below
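Since MDS is queried over LDAP, a bookkeeping lookup could look roughly like the sketch below, assuming the third-party python-ldap package. The server URL, base DN, and attribute names are hypothetical; MDS deployments defined their own schema.

"""Sketch only: read bookkeeping entries from an MDS/LDAP server."""
import ldap  # pip install python-ldap

conn = ldap.initialize("ldap://mds.lhcb-grid.example:2135")  # hypothetical MDS host
conn.simple_bind_s()  # anonymous bind, as was typical for MDS reads

# Find all bookkeeping entries for a given production run (hypothetical schema).
results = conn.search_s(
    "o=lhcb, o=grid",   # hypothetical base DN
    ldap.SCOPE_SUBTREE,
    "(runNumber=42)",   # hypothetical attribute
)
for dn, attrs in results:
    print(dn, attrs.get("dataSetName"))  # hypothetical attribute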

Slide 9: Other LHCb countries (and institutes) developing Tier-1/2/3 centres and GRID plans
- Germany, Poland, Spain, Switzerland, Russia: see the talk at the WP8 meeting of Nov 16
- Several institutes have installed Globus, or are about to (UK institutes, Clermont-Ferrand, Marseille, Bologna, Santiago, ...)

Slide 10: Networking Bottlenecks?
[Flattened network schematic, marked "schematic only"; recoverable elements:]
- SuperJANET III backbone: 155 Mbit/s (SuperJANET IV: 2.5 Gbit/s)
- Links between Univ. Dept, Campus, MAN, London, RAL and CERN at 34 Mbit/s, 100 Mbit/s, 155 Mbit/s, and 622 Mbit/s (March 2001); the CERN connection runs over TEN-155
- Need to study/measure for data transfer and replication within UK and to CERN (see the estimate below)
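A back-of-envelope calculation shows why these link speeds need measuring. The sketch below computes the best-case transfer time over each nominal line rate; the 100 GB sample size is a made-up round number, and real throughput would be lower than the line rate.

"""Sketch only: best-case transfer times over the slide's link speeds."""

def transfer_hours(size_gb: float, link_mbit_s: float) -> float:
    # bytes -> bits, then divide by the nominal line rate
    return size_gb * 8e9 / (link_mbit_s * 1e6) / 3600

for name, mbit in [("Univ. dept 100 Mbit/s", 100),
                   ("RAL 34 Mbit/s", 34),
                   ("SuperJANET III 155 Mbit/s", 155),
                   ("622 Mbit/s upgrade", 622)]:
    print(f"{name}: {transfer_hours(100, mbit):.1f} h for 100 GB")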

