Bulk production of Monte Carlo

Presentation transcript:

Bulk production of Monte Carlo
MICE Collaboration
Dimitrije Maletic, Institute of Physics, University of Belgrade
MICE Project Board, 7th of March 2017

Outline
- Introduction
- MC production on the grid: information pages
- Information about finished MC productions
- Resources available for MC production
- Conclusions

Introduction (1/3)
- MC production is regularly discussed at the Grid and Data Mover meetings, as part of the Software and Computing Infrastructure project. Talks are also given at Analysis meetings and MICE collaboration meetings.
- MC production on the grid includes G4BeamLine simulation and Geant4 cooling-channel simulation using MAUS.
- Since March last year, MAUS MC production using the already produced G4BeamLine libraries has been carried out on request with no delays.
- As of last month, G4BeamLine production has been restarted on the grid, and grid production of Version 4 of the G4BeamLine libraries has started.

Introduction (2/3)
[Block schema of MICE MC production on the grid. Image by Henry Nebrensky]
- MC production runs on sites supporting the MICE VO.
- The MAUS or G4BeamLine output (or a replica) is copied to the Imperial SE for HTTP access.
- A copy of the aggregated output files should also be kept on RAL Tier-1 tape.
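The slides do not show the copy step itself; below is a minimal sketch of how one job output could be replicated to a storage element, assuming the standard gfal-copy client is available with a valid grid proxy. The local file name and destination path are hypothetical placeholders; only the Imperial SE hostname comes from the slides.

```python
# Minimal sketch: replicate one job output file to a storage element.
# Assumes the standard gfal-copy client is installed and a valid grid proxy
# exists. The local file name and destination path are hypothetical; only
# the Imperial SE hostname (gfe02.grid.hep.ph.ic.ac.uk) is from the slides.
import subprocess

local_output = "mc_job_output.tar.gz"                                        # placeholder
destination = "srm://gfe02.grid.hep.ph.ic.ac.uk/some/path/mc_job_output.tar.gz"  # placeholder
subprocess.run(["gfal-copy", local_output, destination], check=True)
```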

Introduction (3/3)
- MC production on the grid starts with a request on the request page. The production manager (me) should be informed about the request; I discuss it with Durga, then insert the entry for the MC production into the CDB and submit the grid jobs.
- To run MC production, the production manager needs a valid grid certificate and must be registered in the MICE VOMS.
- MC production uses the MAUS software installed on CVMFS on the grid.
- The information needed for an MC simulation is the http/srm list of G4BeamLine chunks, the MAUS software version, and the simulation datacard details. MAUS accesses the CDB to get the appropriate configuration and calibrations, as defined by the datacard (see the sketch after this list).
- Each request/start of an MC production is tagged with a unique MCSerialNumber (the row number in the CDB table).
- Information about each MC production, with links to the output, is placed on the MCproduction page linked from the MICE Software home page.
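MAUS simulations are configured through a datacard, which is a plain Python file of settings. The sketch below only illustrates the kind of settings involved: every variable name and value is a hypothetical placeholder, not the actual production datacard.

```python
# Illustrative sketch of a MAUS-style simulation datacard.
# Every variable name and value below is a hypothetical placeholder chosen
# for illustration; the real production datacards specify the geometry,
# number of spills and CDB lookups agreed for each MCSerialNumber.
simulation_geometry_filename = "Stage4_geometry.dat"   # hypothetical geometry file
spill_generator_number_of_spills = 100                 # hypothetical spill count
verbose_level = 1                                      # logging verbosity
# One G4BeamLine chunk from the http/srm list provided with the request:
input_file = "G4BL_chunk_001.root"                     # hypothetical chunk name
```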

MC production on the grid: information pages (1/2)
- Information about finished and running MC productions on the grid:
  http://micewww.pp.rl.ac.uk/projects/analysis/wiki/MCProduction
- Information about (and examples of) MC production requests:
  http://micewww.pp.rl.ac.uk/projects/analysis/wiki/MCProductionRequests
- Information about MC production request entries in the CDB; the MCSerialNumber entries can be checked at:
  http://147.91.87.158/cgi-bin/get_mcserial
- The scripts used for MC production on the grid are available on Launchpad:
  https://launchpad.net/mice-mc-batch-submission
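As an illustration, the MCSerialNumber listing can be fetched directly from the CGI page above. A minimal sketch follows; the exact response format (plain text or HTML table) is an assumption, so no parsing is attempted.

```python
# Fetch the MCSerialNumber listing from the CDB CGI page given above and
# print the raw response. The response format is an assumption; no parsing
# is attempted here.
import urllib.request

url = "http://147.91.87.158/cgi-bin/get_mcserial"
with urllib.request.urlopen(url, timeout=30) as response:
    print(response.read().decode("utf-8", errors="replace"))
```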

MC production on the grid: information pages (2/2)
- HTTP access to the output: http://gfe02.grid.hep.ph.ic.ac.uk:8301/Simulation/MCproduction/
- 33 productions done since March 2016.
- New MC productions: with MCSerialNumber entries in the CDB.
- Old MC productions: with no MCSerialNumber entries in the CDB, only in preprodcdb.
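Since the outputs are exposed over HTTP at the Imperial SE URL above, a production file can be pulled with any HTTP client. A minimal sketch; the subdirectory and file name are hypothetical placeholders.

```python
# Download one MC production output over HTTP from the Imperial SE.
# The base URL is taken from the slide; the subdirectory and file name
# are hypothetical placeholders, not real production paths.
import urllib.request

base = "http://gfe02.grid.hep.ph.ic.ac.uk:8301/Simulation/MCproduction/"
path = "ExampleProduction/example_output.tar.gz"   # placeholder
with urllib.request.urlopen(base + path, timeout=60) as resp:
    with open("example_output.tar.gz", "wb") as out:
        out.write(resp.read())
```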

Information about finished MC productions
- Number of jobs started since March 2016: 52811.
- Parallel running jobs: mean 45.5, maximum 1382.
- Storage space used for MC production: 642 GB.
- Processing time for MC production is not an issue.

Latest production, number of outputs stored per time-of-day interval:
  17:40-49   48
  17:50-59  135
  18:00-09   56
  18:10-19  135
  18:20-29  132
  18:30-39   89
  18:40-49  138
  18:50-59  138
  19:00-09   67
  19:10-19    2
  19:20-29    8
  19:30-39    2
  19:40-49    3
  19:50-59    4
  20:00-09    2
  20:20-29    1
  20:30-39    1
  20:40-49    1

Number of jobs per Logging and Bookkeeping (LB) server used by the WMS:
   5632  https://lcglb01.gridpp.rl.ac.uk
   5643  https://lcglb02.gridpp.rl.ac.uk
   3185  https://svr024.gla.scotgrid.ac.uk
   6926  https://wmslb01.grid.hep.ph.ic.ac.uk
  31378  https://wmslb02.grid.hep.ph.ic.ac.uk

Resources available for MC production

Status of available slots for grid jobs for the MICE VO (MICE VO view, 27.02.2017 12:00): total (running/waiting) 21281, free 18718.

  Running  Waiting  Total  Free  Queue
  -------  -------  -----  ----  --------------------------------------------------------
        0        0      0   216  arc-ce01.gridpp.rl.ac.uk:2811/nordugrid-Condor-grid3000M
        0        0      0   208  arc-ce02.gridpp.rl.ac.uk:2811/nordugrid-Condor-grid3000M
        0        0      0   212  arc-ce03.gridpp.rl.ac.uk:2811/nordugrid-Condor-grid3000M
        0        0      0   185  arc-ce04.gridpp.rl.ac.uk:2811/nordugrid-Condor-grid3000M
        0        0      0   728  ce-01.roma3.infn.it:8443/cream-pbs-fastgrid
        0        0      0   728  ce-01.roma3.infn.it:8443/cream-pbs-grid
      ...
        0        0      0     6  ceprod08.grid.hep.ph.ic.ac.uk:8443/cream-sge-grid.q
        0        0      0   140  cream2.ppgrid1.rhul.ac.uk:8443/cream-pbs-mice
        0        0   1169   226  dc2-grid-21.brunel.ac.uk:2811/nordugrid-Condor-default
        0        0   1080    77  dc2-grid-22.brunel.ac.uk:2811/nordugrid-Condor-default
        0        0     38    14  dc2-grid-25.brunel.ac.uk:2811/nordugrid-Condor-default
        0        0   1029   576  dc2-grid-26.brunel.ac.uk:2811/nordugrid-Condor-default
        0        0    764     6  dc2-grid-28.brunel.ac.uk:2811/nordugrid-Condor-default
        0        0      0   764  hepgrid2.ph.liv.ac.uk:2811/nordugrid-Condor-grid
        0        0      0   461  heplnv146.pp.rl.ac.uk:2811/nordugrid-Condor-grid
     4047      250   4297   985  svr009.gla.scotgrid.ac.uk:2811/nordugrid-Condor-condor_q2d
     4036      244   4280   996  svr010.gla.scotgrid.ac.uk:2811/nordugrid-Condor-condor_q2d
     4050      255   4305   982  svr011.gla.scotgrid.ac.uk:2811/nordugrid-Condor-condor_q2d
     4048      270   4318   984  svr019.gla.scotgrid.ac.uk:2811/nordugrid-Condor-condor_q2d

Available storage space for the MICE VO (values in GB):

  Free      Used      Reserved  Free      Used      Reserved  Tag             SE
  online    online    online    nearline  nearline  nearline
  224333    833672    0         0         0         0         -               dc2-grid-64.brunel.ac.uk
  ...
  106387    42797     0         0         0         0         -               gfe02.grid.hep.ph.ic.ac.uk
  51770     8225      0         0         0         0         -               heplnx204.pp.rl.ac.uk
  ...
  177832    82794     0         0         0         0         -               se01.dur.scotgrid.ac.uk
  12018     13338     0         0         0         0         -               se2.ppgrid1.rhul.ac.uk
  6589      4593      11183     0         0         0         -               srm-mice.gridpp.rl.ac.uk
  6589      4593      11183     0         0         0         MICE_MISC_TAPE  srm-mice.gridpp.rl.ac.uk
  6589      4593      11183     0         0         0         MICE_RECO       srm-mice.gridpp.rl.ac.uk
  6589      4593      11183     14314     15747     30061     -               srm-mice.gridpp.rl.ac.uk
  6589      4593      11183     14314     15747     30061     MICE_RAW_TAPE   srm-mice.gridpp.rl.ac.uk
  6589      4593      11183     6602      4002      10604     -               srm-mice.gridpp.rl.ac.uk
  6589      4593      11183     6602      4002      10604     MICE_RAW_TAPE2  srm-mice.gridpp.rl.ac.uk
  ...

Totals for the MICE VO: free 626526 GB, used 1254470 GB.
Imperial SE: free 106387 GB, used 42797 GB.
At RAL (srm-mice only): disk used 4,593 GB (41.08%) of 11,183 GB total; tape used 20,054 GB; tape to go to 300 TB.
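The slot listing above follows a Running/Waiting/Total/Free/Queue layout; how it was produced (e.g. an lcg-infosites query for the MICE VO) is not stated in the slides and is an assumption. A minimal sketch of aggregating the total and free slots from such a listing:

```python
# Aggregate total and free slots from a queue listing in the format shown
# above ("Running  Waiting  Total  Free  Queue", one queue per line).
# How the listing is obtained (e.g. via an lcg-infosites query for the
# MICE VO) is an assumption; this only sums an already-captured listing.
def summarise_slots(listing: str) -> tuple[int, int]:
    total_slots = free_slots = 0
    for line in listing.splitlines():
        fields = line.split()
        # Expect five columns with the first four numeric; this skips
        # headers, separator lines and ellipses.
        if len(fields) == 5 and all(f.isdigit() for f in fields[:4]):
            total_slots += int(fields[2])
            free_slots += int(fields[3])
    return total_slots, free_slots

# Example with two rows copied from the slide:
listing = """
0 0 0 216 arc-ce01.gridpp.rl.ac.uk:2811/nordugrid-Condor-grid3000M
4047 250 4297 985 svr009.gla.scotgrid.ac.uk:2811/nordugrid-Condor-condor_q2d
"""
print(summarise_slots(listing))   # -> (4297, 1201)
```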

Conclusions
- 33 MC productions have been processed on the grid, using 642 GB of storage space. Processing time for MC production is not an issue.
- The available processing power and storage capacity are not a bottleneck for running many more productions per year (well over an order of magnitude more).
- My availability is not an issue (permanent employment). The management of my institute and my colleagues in the laboratory are very happy that we are part of an international collaboration on a UK-based experiment.
- We expect, and are ready for, an increase in MC production requests after the February/March ISIS user run cycle finishes, using the new G4BeamLine libraries and MAUS v2.8.4 on CVMFS.

THANK YOU!