Slide 1: Experiences of Submitting UKDMC and LISA GEANT4 Jobs
Alex Howard, Imperial College London, Friday 10th May 2002

First, I should say thanks for the opportunity of using the growing resource of the GRID, particularly for Particle Astrophysics (LISA and UKDMC), which has no direct funding to (or from) GridPP. It is increasingly clear that we NEED the GRID in order to carry out accurate simulations for both LISA and Dark Matter, either to investigate signals or possible backgrounds at the microscopic and macroscopic level. This is specifically processor-intensive work. I am a novice: only 5 weeks' usage of the GRID.

Slide 2: Outline
1. Experimental configuration
   a) Dark Matter
   b) LISA
2. Output of >100 jobs
3. Benefits
4. Comments on functionality and UI

Slide 3: UKDMC Experiment

Slide 4: Two-Phase Liquid Xenon, ZEPLIN III
ZEPLIN III, our near-future detector, will offer extreme levels of signal-to-background discrimination. Interpretation will only be possible with full Monte Carlo studies (a novelty for Dark Matter).

Slide 5: Prototype Simulation: Full Lab Geometry

Slide 6: GEANT4
Due to the need to develop Monte Carlo simulations for Dark Matter experiments, I have become involved in the development of Geant4, particularly exploiting the low-energy, radioactive decay and neutron extensions of the toolkit (see the advanced example DMX within the release package).
From basic simulations of our prototype system it is clear that greater computing power is required in order to produce high statistics and accurately model our detectors. In addition, it is envisaged to develop a simulation of the underground environment, UNEX, to give accurate spectra and particle types within the experimental area.
Furthermore, Imperial is also involved in LISA, a gravitational wave experiment, where charging of the proof masses becomes critical. Simulating the charging rate requires a large number of cosmic-ray events, with rare hadronic showers resulting in residual charge.

Slide 7: One High-Energy Event (event display; labelled components: LXe, GXe, PMT, mirror, source)

Slide 8: Neutrons (plot; legend: room, elastic, inelastic, outside)

Slide 9: LISA/STEP
LISA is a gravitational wave experiment; STEP is a test of the equivalence principle. Both rely on floating proof masses with no electrical connections, and so are prone to charging effects from cosmic rays. However, the charging rate is relatively low (~1 in 5000 particles); a rough statistics estimate is sketched below.
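
To see why this pushes the simulation onto the Grid, here is a rough back-of-the-envelope estimate (a sketch added for illustration, not part of the original talk) of how many primary particles must be simulated to reach a given statistical precision on the charging rate, assuming the ~1 in 5000 figure above and simple Poisson counting:

```python
# Rough estimate of how many primary cosmic-ray events are needed to pin down
# the proof-mass charging rate, assuming (as stated on this slide) that only
# ~1 in 5000 primaries deposits net charge. Numbers are illustrative only.
P_CHARGE = 1.0 / 5000.0  # assumed probability that a primary charges the proof mass

def primaries_needed(relative_error):
    """Primaries required so that the Poisson error on the number of charging
    events gives the requested relative error on the rate."""
    # With N primaries we expect n = N * P_CHARGE charging events and a
    # fractional uncertainty of 1/sqrt(n), so we need n = 1/relative_error**2.
    n_charging = 1.0 / relative_error ** 2
    return n_charging / P_CHARGE

for err in (0.10, 0.05, 0.02):
    print(f"{err:4.0%} precision -> ~{primaries_needed(err):,.0f} primaries")
```

Under these assumptions, a 5% measurement of the charging rate already needs about two million primaries, which is consistent with the scale of running reported on slide 12.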

Slide 10: LISA Geometry and Geant4 Images

Slide 11: LISA Geometry and Geant4 Images

Slide 12: Output from Grid Running
Over 100 jobs have been run on the grid to try to estimate the charging rate in the proof mass and to test different cuts and processes within Geant4. Millions of events have been run in ~300 hours of CPU time. The preliminary outcome is as follows:

Slide 13: 6 secs and Convergence of Charging Rate

Slide 14: 6 secs and Convergence of Charging Rate
Initial indications of the charging rate. (A sketch of how per-job outputs can be merged and watched for convergence follows below.)
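
The following is a minimal sketch, not the author's actual analysis code, of how per-job summaries (primaries simulated, charging events seen) could be merged into a single charging-rate estimate whose convergence can be monitored as jobs are folded in; the summary format and the numbers are purely illustrative:

```python
# Merge the outputs of many independent grid jobs into one charging-rate
# estimate and watch it converge as jobs are added. Each job is assumed to
# report a (primaries simulated, charging events seen) pair in a summary file.
import math

def merged_rate(job_summaries):
    """job_summaries: iterable of (n_primaries, n_charging) tuples, one per job.
    Yields (jobs_so_far, rate, poisson_error) after each job is folded in."""
    total_primaries = 0
    total_charging = 0
    for i, (n_prim, n_chg) in enumerate(job_summaries, start=1):
        total_primaries += n_prim
        total_charging += n_chg
        rate = total_charging / total_primaries
        err = math.sqrt(total_charging) / total_primaries if total_charging else 0.0
        yield i, rate, err

# Illustrative input only: 100 jobs of 50k primaries with ~1/5000 charging probability.
fake_jobs = [(50_000, 10)] * 100
for njobs, rate, err in merged_rate(fake_jobs):
    if njobs % 25 == 0:
        print(f"after {njobs:3d} jobs: rate = {rate:.2e} +/- {err:.1e} per primary")
```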

Slide 15: Grid Job Submission: My Experience
With the auspicious title of DataGrid User... After running >100 jobs I have had some experience of running jobs on the DataGrid, and it has mostly been good. However, there are a few things that, if implemented, would be useful. Their unavailability may just reflect the youthful nature of the GRID, so they may already be present, at least in the design (and some of these points may be due to my own ignorance). A sketch of a typical submission wrapper for this kind of campaign follows below.
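
The sketch below shows the general shape of a wrapper that fans a Geant4 run out into many grid jobs and keeps track of the returned identifiers. It is hedged: the JDL attributes and the dg-job-submit command follow the EDG user tools of the era, but the exact attribute names, flags and output format, and the file names run_geant4.sh, lisa.mac and summary_*.dat, are assumptions for illustration, not a record of what was actually used.

```python
# Fan one simulation campaign out into many grid jobs: write one JDL per job,
# submit it, and append the returned job identifier to a file so the long
# identifiers never have to be copied around by hand.
import subprocess
from pathlib import Path

JDL_TEMPLATE = """\
Executable    = "run_geant4.sh";
Arguments     = "{seed} {n_events}";
StdOutput     = "job_{seed}.out";
StdError      = "job_{seed}.err";
InputSandbox  = {{"run_geant4.sh", "lisa.mac"}};
OutputSandbox = {{"job_{seed}.out", "job_{seed}.err", "summary_{seed}.dat"}};
"""

def submit_jobs(n_jobs, events_per_job, id_file="job_ids.txt"):
    """Submit n_jobs jobs, each with its own random seed, and record their IDs."""
    with open(id_file, "a") as ids:
        for seed in range(n_jobs):
            jdl = Path(f"lisa_{seed}.jdl")
            jdl.write_text(JDL_TEMPLATE.format(seed=seed, n_events=events_per_job))
            result = subprocess.run(["dg-job-submit", str(jdl)],
                                    capture_output=True, text=True, check=True)
            # Assume the job identifier is the URL-like token in the command's
            # output (the exact output format is release-dependent).
            for token in result.stdout.split():
                if token.startswith("https://"):
                    ids.write(token + "\n")

if __name__ == "__main__":
    submit_jobs(n_jobs=100, events_per_job=50_000)
```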

Slide 16: Things Missing, Apparently (1)
1. Status of job: events run, near completion?
2. Run-time partial grab of output files, to check on the job (RB release 1.4, see Dave's talk)
3. Length of identifier: cumbersome
4. Saving identifiers to file, for ease of management of many jobs
5. Request that output be saved to file automatically when the job completes
6. Proxy expiration and file loss: one can protect against it, but it can occur
7. File back-up: prevent losses when things crash, and therefore reduce the number of repeat jobs
8. Job clearing and file clearing, particularly if a job crashes or disappears
(A workaround sketch for points 1, 4 and 5 follows below.)
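
As a client-side workaround for points 1, 4 and 5, the saved identifiers can be polled and each output sandbox pulled back as soon as its job finishes. This is again only a sketch: dg-job-status and dg-job-get-output were the EDG commands of the time, but the flags and the exact status strings matched here are assumptions.

```python
# Poll the jobs listed in job_ids.txt and fetch the output of any that have
# finished, so results are collected automatically instead of by hand.
import subprocess
import time

def poll_and_collect(id_file="job_ids.txt", outdir="collected", wait_s=600):
    with open(id_file) as f:
        pending = [line.strip() for line in f if line.strip()]
    while pending:
        still_pending = []
        for job_id in pending:
            status = subprocess.run(["dg-job-status", job_id],
                                    capture_output=True, text=True).stdout
            if "Done" in status or "OutputReady" in status:
                # Pull the output sandbox back before it can be cleaned up.
                subprocess.run(["dg-job-get-output", "--dir", outdir, job_id])
            elif "Aborted" in status or "Cancelled" in status:
                print(f"lost job, will need resubmission: {job_id}")
            else:
                still_pending.append(job_id)
        pending = still_pending
        if pending:
            time.sleep(wait_s)

if __name__ == "__main__":
    poll_and_collect()
```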

Slide 17: Things Missing, Apparently (2)
10. Diagnostics: memory usage (single-event leaks), max/average; CPU time (can be accessed at runtime with Globus); disc access, i.e. efficiency of local staging, etc.
11. Forced killing of jobs? Clearing of files, or keeping of partial files; cancelling a job loses everything
12. Run-time limit / disk usage / memory usage, in case of problems or for diagnostics?
14. Node limit: a batch script to run jobs sequentially without clogging up the farm, perhaps driven from the proxy request? (A simple client-side throttle is sketched below.)
15. Shared disc for data? Input files are ~500 Mbytes and are copied 32 times...
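
For item 14, a simple client-side throttle can keep the number of simultaneously submitted jobs below a site's per-user limit while the rest wait locally. This sketch assumes submit and running-job-count helpers like those in the previous sketches; nothing in it is specific to the EDG tools.

```python
# Keep at most MAX_RUNNING of this user's jobs on the grid at any one time,
# submitting the remainder as slots free up.
import time

MAX_RUNNING = 10   # stay below the 14-job user limit mentioned on slide 19

def throttled_submit(jdl_files, submit, count_running, wait_s=300):
    """submit(jdl) sends one job; count_running() returns how many of this
    user's jobs are still queued or running."""
    queue = list(jdl_files)
    while queue:
        while queue and count_running() < MAX_RUNNING:
            submit(queue.pop(0))
        if queue:
            time.sleep(wait_s)
```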

Slide 18: Things Missing, Apparently (3)
16. What decides resource management? Queued at IC or at RAL: speed of processor? Disc transfer time?
17. Jobs cleared before the output can be retrieved (RB dies)
18. Housekeeping/cleaning of tmp files
19. Script to save output to your own account without 3rd-party access?
20. Prone to abuse?
21. Tidy up: clear dangling jobs and tmp files for a given user

Slide 19: Things Missing, Apparently (4)
22. Reliability?
23. Compilers and inter-site homogeneity: IC = egcs whereas RAL = gcc
24. Level of resource available and average usage/users/CPU power: stage requests, think about optimising the problem, look elsewhere
25. More nodes for my VO would naturally be helpful: IC = 16 nodes (user limit 14), RAL = 8 nodes, elsewhere?
Apologies if some of these comments are due to me...

Slide 20: Conclusions
The GRID is clearly a very powerful resource that has enabled me to run a lot of jobs in a very short space of time.
It is clear that Dark Matter and spacecraft-charging studies at this time NEED the GRID, particularly for accurate Monte Carlo simulations of future detectors (ZEPLIN III) and of spacecraft charging rates (LISA/STEP).
In running jobs, some things could perhaps be more elegant or convenient to use, but on the whole it is not too difficult.